Patent Abstract:
Three-dimensional object detection device. The present invention relates to a three-dimensional object detection device characterized in that it is provided with: an image capture device (10) for capturing an image of an area to the rear of a host vehicle; a detection area setting unit (34) for setting predetermined detection areas in the lateral directions to the rear of the host vehicle; an image conversion device (31) for converting the viewpoint of the captured image to create a bird's eye view image; a three-dimensional object detection device (32, 33) for generating difference waveform information by counting, on a difference image in which the positions of bird's eye view images obtained at different times have been aligned in terms of the bird's eye view, the pixels that indicate a predetermined difference and forming a frequency distribution, and for detecting a three-dimensional object in the detection areas based on the difference waveform information; a relative movement speed calculation device (33) for calculating the relative movement speed of the three-dimensional object with respect to the host vehicle based on the difference waveform information; and a detection area adjustment unit (34) which enlarges the detection areas rearward with respect to the direction of advance of the vehicle when the three-dimensional object has been detected in the detection areas A1, A2 and the relative movement speed of the three-dimensional object is a predetermined value or greater.
Publication number: BR112014020316B1
Application number: R112014020316-4
Filing date: 2013-02-13
Publication date: 2021-09-14
Inventors: Osamu Fukata; Yasuhisa Hayakawa
Applicant: Nissan Motor Co., Ltd.
IPC main classification:
Patent Description:

TECHNICAL FIELD
[001] The present invention relates to a three-dimensional object detection device. This application claims right of priority based on Japanese Patent Application No. 12-037482 filed on February 23, 2012, and in the designated states that accept incorporation of a document by reference, the contents described in the above application are incorporated herein by reference and are considered part of the description of the present application. BACKGROUND ART
[002] In a conventionally known technique, two captured images captured at different times are converted to aerial view images, and an obstacle is detected based on the differences between the two converted aerial view images (see Patent Document 1). Prior Art Documents Patent Documents Patent Document 1: Japanese Laid-Open Patent Application No. 2008-227646. DESCRIPTION OF THE INVENTION Problems to be Solved by the Invention
[003] When a three-dimensional object is to be detected in a detection area based on a captured image in which the rear of a host vehicle has been captured, there are cases in which two adjacent vehicles (three-dimensional objects) traveling in a lane adjacent to the lane in which the host vehicle is traveling are traveling consecutively, and when the host vehicle has been passed by the leading adjacent vehicle of the two consecutive adjacent vehicles, the first adjacent vehicle is no longer detected in the detection area, and the driver may thus determine that an adjacent vehicle (three-dimensional object) is not present behind the host vehicle, without considering the fact that the second adjacent vehicle is present behind the host vehicle.
[004] The problem to be solved by the present invention is to provide a three-dimensional object detection device that is capable of adequately detecting two adjacent vehicles when the two adjacent vehicles are traveling consecutively. Means Used to Solve the Above Problems
[005] The present invention solves the problem by enlarging the detection area rearward with respect to the direction of advance of the vehicle when a three-dimensional object has been detected in the detection area and the relative movement speed of the three-dimensional object is a predetermined value or greater. Effect of the Invention
[006] According to the present invention, when two adjacent vehicles are traveling consecutively in an adjacent lane, and the first adjacent vehicle (three-dimensional object) has been detected and the relative movement speed of the first adjacent vehicle is a predetermined value or greater, it is determined that the host vehicle has passed the first adjacent vehicle, and the rear of the detection areas with respect to the direction of advance of the vehicle is enlarged, whereby the second adjacent vehicle following the first adjacent vehicle can be properly detected. DESCRIPTION OF DRAWINGS
[007] Figure 1 is a schematic structural diagram of a vehicle in which a three-dimensional object detection device has been mounted according to the first embodiment.
[008] Figure 2 is a plan view illustrating the state of travel of the vehicle in Figure 1.
[009] Figure 3 is a block view illustrating the details of the computer according to the first embodiment.
[010] Figure 4 is a view to describe the overview of the processing of the alignment unit according to the first embodiment; Figure 4(a) is a plan view illustrating the state of motion of the vehicle, and Figure 4(b) is an image illustrating an alignment overview.
[011] Figure 5 is a schematic view illustrating the manner in which the difference waveform is generated by the three-dimensional object detection unit according to the first embodiment.
[012] Figure 6 is a view illustrating the small areas divided by the three-dimensional object detection unit according to the first embodiment.
[013] Figure 7 is a view illustrating an example of the histogram obtained by the three-dimensional object detection unit according to the first embodiment.
[014] Figure 8 is a view illustrating the weight used by the three-dimensional object detection unit according to the first embodiment.
[015] Figure 9 is a view illustrating another example of the histogram obtained by the three-dimensional object detection unit according to the first embodiment.
[016] Figure 10 is a view to describe the method for assessing an adjacent vehicle present in an adjacent lane.
[017] Figure 11 is a view to describe the method for adjusting the detection areas by the detection area adjustment unit according to the first embodiment.
[018] Figure 12 is a view to describe the method to adjust the detection areas when the host vehicle is changing direction.
[019] Figure 13 is a flowchart illustrating the method for detecting an adjacent vehicle according to the first embodiment.
[020] Figure 14 is a flowchart illustrating the method for adjusting the detection areas according to the first embodiment.
[021] Figure 15 is a view illustrating an example of the width of the detection areas in the direction of travel of the host vehicle adjusted by the detection area adjustment unit according to the first embodiment.
[022] Figure 16 is a block view illustrating the details of the computer according to the second embodiment.
[023] Figure 17 is a view illustrating the travel state of the vehicle; Figure 17(a) is a plan view illustrating the positional relationship between the detection areas and the like, and Figure 17(b) is a perspective view illustrating the positional relationship between the detection areas and the like in real space.
[024] Figure 18 is a view to describe the operation of the luminance difference calculation unit according to the second embodiment; Figure 18(a) is a view illustrating the positional relationship between the attention line, the reference line, the attention point, and the reference point in an aerial view image, and Figure 18(b) is a view illustrating the positional relationship between the attention line, the reference line, the attention point, and the reference point in real space.
[025] Figure 19 is a view to describe the detailed operation of the luminance difference calculation unit according to the second embodiment; Figure 19(a) is a view illustrating the detection area in the aerial view image, and Figure 19(b) is a view illustrating the positional relationship between the attention line, the reference line, the attention point, and the reference point in the aerial view image.
[026] Figure 20 is a view illustrating an example image to describe the edge detection operation.
[027] Figure 21 is a view illustrating the edge line and the luminance distribution on the edge line; Figure 21(a) is a view illustrating the luminance distribution when a three-dimensional object (adjacent vehicle) is present in the detection area, and Figure 21(b) is a view illustrating the luminance distribution when a three-dimensional object is not present in the detection area.
[028] Figure 22 is a flowchart illustrating the method for detecting an adjacent vehicle according to the second embodiment.
[029] Figure 23 is a view to describe another example of the method for adjusting the detection area when the host vehicle is changing direction. Preferred Embodiments of the Invention <<Embodiment 1>>
[030] Figure 1 is a schematic structural diagram of a vehicle in which a three-dimensional object detection device has been mounted according to the first embodiment. An object of the three-dimensional object detection device 1 according to the present embodiment is to detect another vehicle (which may hereinafter be referred to as an "adjacent vehicle") present in an adjacent lane where contact is possible should the host vehicle V1 change lanes. The three-dimensional object detection device 1 according to the present embodiment is provided with a camera 10, a speed sensor 20, a computer 30, a steering angle sensor 40, and a notification device 50, as illustrated in Figure 1.
[031] The camera 10 is fixed to the host vehicle V1 at a location at a height h at the rear of the host vehicle V1 so that the optical axis is at an angle θ downward from the horizontal, as illustrated in Figure 1. From this position, the camera 10 captures a predetermined area of the environment surrounding the host vehicle V1. The speed sensor 20 detects the travel speed of the host vehicle V1 from the wheel speed detected by, for example, a wheel speed sensor for detecting the rotational speed of a wheel. The computer 30 detects an adjacent vehicle present in an adjacent lane rearward of the host vehicle. The steering angle sensor 40 is an angle sensor attached near the steering column or steering wheel, and detects the rotational angle of the steering shaft as the steering angle. The steering angle information detected by the steering angle sensor 40 is transmitted to the computer 30. The notification device 50 provides a warning to the driver that an adjacent vehicle is present when, as a result of the adjacent vehicle detection performed by the computer 30, an adjacent vehicle has been detected at the rear of the host vehicle. The notification device 50 is not particularly limited; examples include a loudspeaker for outputting an audio warning to the driver, a monitor for displaying a warning message, and a warning light that provides an illuminated warning within the instrument panel.
[032] Figure 2 is a plan view illustrating the travel state of the host vehicle V1 in Figure 1. As illustrated in the drawing, the camera 10 captures the rear side of the vehicle at a predetermined view angle a. At this time, the view angle a of the camera 10 is set to a view angle that allows the left and right lanes (adjacent lanes) to be captured in addition to the lane in which the host vehicle V1 is traveling.
[033] Figure 3 is a block view illustrating the details of the computer 30 in Figure 1. The camera 10, the speed sensor 20, the steering angle sensor 40, and the notification device 50 are also illustrated in Figure 3 in order to clearly indicate the connection relationships.
[034] As illustrated in Figure 3, the computer 30 is provided with a viewpoint conversion unit 31, an alignment unit 32, a three-dimensional object detection unit 33, and a detection area adjustment unit 34. The configuration of these units is described below.
[035] Captured image data of the predetermined area obtained by the capture performed by the camera 10 is input into the viewpoint conversion unit 31, and the captured image data thus input is converted to bird's eye view image data, which represents an aerial view state. An aerial view state is a state of viewing from the viewpoint of an imaginary camera that looks down from above, for example, vertically downward. Viewpoint conversion can be carried out in the manner described in, for example, Japanese Laid-Open Patent Application No. 2008-219063. The reason that captured image data is converted to aerial view image data is based on the principle that perpendicular edges unique to a three-dimensional object are converted to a straight line group that passes through a specific fixed point by conversion to aerial view image data, and utilizing this principle allows a planar object and a three-dimensional object to be differentiated.
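As an illustration of this viewpoint conversion step, the following is a minimal sketch assuming OpenCV and NumPy are available and that a 3x3 homography mapping image coordinates to road-plane coordinates has already been obtained by calibration from the camera height h and depression angle θ; the function name, output size, and the calibration itself are assumptions made for illustration, not details given in the patent.

```python
import cv2
import numpy as np

def to_aerial_view(captured_image: np.ndarray,
                   homography: np.ndarray,
                   output_size=(400, 600)) -> np.ndarray:
    """Convert a rear-camera image into an aerial (bird's eye) view image.

    `homography` is a 3x3 matrix mapping points on the road plane seen in
    the camera image to a top-down coordinate system; it would be derived
    in advance from the camera mounting height h and depression angle theta.
    """
    return cv2.warpPerspective(captured_image, homography, output_size)
```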
[036] The aerial view image data obtained by the viewpoint conversion carried out by the viewpoint conversion unit 31 is sequentially input into the alignment unit 32, and the positions of the input aerial view image data at different times are aligned. Figure 4 is a view to describe the processing overview of the alignment unit 32; Figure 4(a) is a plan view illustrating the movement state of the host vehicle V1, and Figure 4(b) is an image illustrating an alignment overview.
[037] As illustrated in Figure 4(a), the host vehicle V1 at the current moment is positioned at P1, and the host vehicle V1 at a single moment prior was positioned at P1'. It is assumed that an adjacent vehicle V2 is positioned in the rear lateral direction of the host vehicle V1 and is traveling parallel to the host vehicle V1, that the adjacent vehicle V2 at the current moment is positioned at P2, and that the adjacent vehicle V2 at a single moment prior was positioned at P2'. Furthermore, it is assumed that the host vehicle V1 has moved a distance d in a single moment. The phrase "a single moment prior" may be a moment in the past by a time set in advance (for example, a single control cycle) from the current moment, or it may be a moment in the past by an arbitrary time.
[038] In this state, an aerial view image PBt at the current moment is illustrated in Figure 4(b). The white lines drawn on the road surface are rectangular in the aerial view image PBt and are relatively accurate in a planar view, but the adjacent vehicle V2 (position P2) is tipped over. The same applies to the aerial view image PBt-1 at a single moment prior: the white lines drawn on the road surface are rectangular and are relatively accurate in a planar view, but the adjacent vehicle V2 (position P2') is tipped over. As described earlier, the perpendicular edges of a three-dimensional object (edges that stand erect in three-dimensional space from the road surface are also included in a strict meaning of perpendicular edge) appear as a straight line group along the tipping direction due to the process for converting to aerial view image data, but since a planar image on the road surface does not include perpendicular edges, such tipping does not occur even when the viewpoint has been converted.
[039] The alignment unit 32 aligns the aerial view images PBt and PBt-1, such as those described above, in terms of data. When this is done, the alignment unit 32 offsets the aerial view image PBt-1 at a single moment prior, and matches its position with the aerial view image PBt at the current moment. In relation to the left-side image in Figure 4(b), the center image illustrates the state offset by a movement distance d'. The offset amount d' is the amount of movement in the aerial view image data that corresponds to the actual movement distance d of the host vehicle V1 illustrated in Figure 4(a), and is decided based on a signal from the speed sensor 20 and the time from a single moment prior to the current moment.
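As a minimal sketch of this alignment step, assuming the aerial view images are NumPy arrays whose rows run along the direction of travel with a known scale of meters per pixel (the scale, axis orientation, and function names are assumptions made for illustration):

```python
import numpy as np

def align_previous_view(pb_prev: np.ndarray, host_speed_mps: float,
                        dt_s: float, meters_per_pixel: float) -> np.ndarray:
    """Offset the aerial view image PBt-1 by the movement distance d'.

    d' is the image-space equivalent of the actual movement distance
    d = host speed * elapsed time; the previous image is shifted so that
    its position matches the aerial view image PBt at the current moment.
    """
    d_pixels = int(round((host_speed_mps * dt_s) / meters_per_pixel))
    if d_pixels == 0:
        return pb_prev.copy()
    shifted = np.zeros_like(pb_prev)
    shifted[d_pixels:, :] = pb_prev[:-d_pixels, :]  # shift along the travel axis
    return shifted
```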
[040] After the alignment, the alignment unit 32 obtains the difference between the aerial view images PBt and PBt-1, and generates difference image data PDt. In the present embodiment, the alignment unit 32 takes the absolute value of the difference in the pixel values of the aerial view images PBt and PBt-1 in order to correspond to variation in the illumination environment, and when the absolute value is equal to or greater than a predetermined threshold value th, the pixel values of the difference image PDt are set to "1", and when the absolute value is less than the predetermined threshold value th, the pixel values of the difference image PDt are set to "0", which allows difference image data PDt such as that illustrated on the right side of Figure 4(b) to be generated.
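A minimal sketch of this thresholding step, assuming the two aligned aerial view images are available as grayscale NumPy arrays; the threshold value used below is a placeholder, not a value specified in the text.

```python
import numpy as np

def difference_image(pb_now: np.ndarray, pb_prev_aligned: np.ndarray,
                     th: int = 20) -> np.ndarray:
    """Generate the binary difference image PDt.

    Pixels whose absolute grayscale difference between the current aerial
    view image PBt and the aligned previous image PBt-1 is equal to or
    greater than th are set to "1"; all other pixels are set to "0".
    """
    diff = np.abs(pb_now.astype(np.int16) - pb_prev_aligned.astype(np.int16))
    return (diff >= th).astype(np.uint8)
```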
[041] Returning to Figure 3, the three-dimensional object detection device 33 detects a three-dimensional object based on the difference image data PDt illustrated in Figure 4(b). In this case, the three-dimensional object detection device 33 also calculates the movement distance of the three-dimensional object in real space. The three-dimensional object detection device 33 first generates a difference waveform when the three-dimensional object is detected and the movement distance is to be calculated.
[042] Specifically, the three-dimensional object detection device 33 generates a difference waveform in the detection areas set by the previously described detection area adjustment unit 34. The purpose of the three-dimensional object detection device 1 of the present example is to calculate the movement distance of the adjacent vehicle with which there is a possibility of contact should the host vehicle V1 change lanes. Therefore, in the present example, the rectangular detection areas A1, A2 are set to the rear side of the host vehicle V1, as illustrated in Figure 2. Such detection areas A1, A2 may be set from a position relative to the host vehicle V1, or may be set based on the position of the white lines. When they are set based on the position of the white lines, the three-dimensional object detection device 1 can use, for example, known white line recognition techniques. The method for setting the detection areas as performed by the detection area adjustment unit 34 is described later.
[043] The three-dimensional object detection device 33 recognizes as ground lines L1, L2 the edges of the detection areas A1, A2 thus set that are on the host vehicle V1 side (the edges along the direction of travel), as illustrated in Figure 2. Generally, a ground line refers to a line in which a three-dimensional object is in contact with the ground, but in the present embodiment a ground line is not a line in contact with the ground, but is rather set in the manner described above. Even in such a case, the difference between the ground line according to the present embodiment and the normal ground line determined from the position of the adjacent vehicle V2 is not exceedingly great as determined by experience, and there is no problem in actuality.
[044] Figure 5 is a schematic view illustrating the manner in which the difference waveform is generated by the three-dimensional object detection unit 33. As illustrated in Figure 5, the three-dimensional object detection device 33 generates a difference waveform DWt from the part of the difference image PDt (drawing on the right in Figure 4(b)) calculated by the alignment unit 32 that corresponds to the detection areas A1, A2. In this case, the three-dimensional object detection device 33 generates the difference waveform DWt along the direction in which the three-dimensional object falls as a result of the viewpoint conversion. In the example illustrated in Figure 5, only detection area A1 is described for the sake of convenience, but the difference waveform DWt is generated for detection area A2 as well using the same procedure.
[045] More specifically, first, the three-dimensional object detection device 33 defines a line La in the direction in which the three-dimensional object falls in the difference image data PDt. The three-dimensional object detection device 33 then counts the number of difference pixels DP that indicate a predetermined difference on the line La. In the present embodiment, the difference pixels DP that indicate a predetermined difference are pixels whose values in the difference image PDt are expressed by "0" and "1", and pixels indicated by "1" are counted as difference pixels DP.
[046] The three-dimensional object detection device 33 counts the number of difference pixels DP, and then determines the crossing point CP of the line La and the ground line L1. The three-dimensional object detection device 33 then correlates the crossing point CP and the count number, decides the horizontal-axis position, that is, the position on the axis in the vertical direction in the drawing on the right in Figure 5, based on the position of the crossing point CP, decides the vertical-axis position, that is, the position on the axis in the lateral direction in the drawing on the right in Figure 5, from the count number, and plots the count number at the position corresponding to the crossing point CP.
[047] Likewise, the three-dimensional object detection device 33 defines lines Lb, Lc, ... in the direction in which the three-dimensional object falls, counts the number of difference pixels DP, decides the horizontal-axis position based on the position of each crossing point CP, decides the vertical-axis position from the count number (the number of difference pixels DP), and plots the positions. The three-dimensional object detection device 33 repeats the above in sequence to form a frequency distribution and thereby generate a difference waveform DWt, as illustrated in the drawing on the right in Figure 5.
[048] Here, the difference pixels DP in the difference image data PDt are pixels that have changed in the images at different times; in other words, they are locations that can be construed to be where a three-dimensional object was present. Therefore, in locations where a three-dimensional object was present, the number of pixels is counted along the direction in which the three-dimensional object tips to form a frequency distribution, and a difference waveform DWt is thereby generated. In particular, the number of pixels is counted along the direction in which the three-dimensional object tips, and a difference waveform DWt is therefore generated from information about the height direction in relation to the three-dimensional object.
[049] The difference waveform DWt is therefore one mode of pixel distribution information that shows a predetermined luminance difference, and the "pixel distribution information" in the present embodiment can be positioned as information indicating the state of distribution of "pixels having a luminance difference at a predetermined threshold value or greater" as detected along the direction in which the three-dimensional object tips when the captured image is converted in viewpoint to an aerial view image. In other words, in the aerial view image obtained by the viewpoint conversion unit 31, the three-dimensional object detection device 33 detects a three-dimensional object based on pixel distribution information having a luminance difference at a predetermined threshold value or greater along the direction in which the three-dimensional object tips when the captured image is converted in viewpoint to an aerial view image.
[050] The lines La and Lb in the direction in which the three-dimensional object falls have different distances over which they overlap the detection area A1, as illustrated in the drawing on the left in Figure 5. Therefore, the number of difference pixels DP is greater on the line La than on the line Lb when the detection area A1 is assumed to be filled with difference pixels DP. For this reason, the three-dimensional object detection device 33 performs normalization based on the distance over which the lines La, Lb in the direction in which the three-dimensional object falls and the detection area A1 overlap when the vertical-axis position is decided from the count number of the difference pixels DP. In a specific example, there are six difference pixels DP on the line La and five difference pixels DP on the line Lb in the drawing on the left in Figure 5. Therefore, when the vertical-axis position is decided from the count number in Figure 5, the three-dimensional object detection device 33 divides the count number by the overlap distance or performs normalization in another manner. The values of the difference waveform DWt that correspond to the lines La, Lb in the direction in which the three-dimensional object falls are thereby made substantially the same.
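To make the counting and normalization concrete, the following is a simplified sketch, assuming each collapse-direction line (La, Lb, ...) has been pre-computed as an array of the (row, column) pixel coordinates where it overlaps the detection area; that representation and the function name are assumptions made for illustration.

```python
import numpy as np

def difference_waveform(pd_image: np.ndarray, collapse_lines: list) -> np.ndarray:
    """Generate the difference waveform DWt for one detection area.

    For every line La, Lb, ... along the direction in which the
    three-dimensional object falls, the difference pixels DP (value "1")
    lying on the line are counted, and the count is normalized by the
    length over which the line overlaps the detection area.
    """
    waveform = []
    for line in collapse_lines:
        values = pd_image[line[:, 0], line[:, 1]]  # "0"/"1" values on this line
        count = int(values.sum())                  # number of difference pixels DP
        overlap = len(line)                        # overlap distance with the area
        waveform.append(count / overlap if overlap else 0.0)
    return np.array(waveform)                      # one value per crossing point CP
```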
[051] After the difference waveform DWt has been generated, the three-dimensional object detection device 33 detects an adjacent vehicle present in the adjacent lane based on the generated difference waveform DWt. The three-dimensional object detection device 33 also calculates the movement distance by comparing the difference waveform DWt at the current moment and the difference waveform DWt-1 at a single moment prior. In other words, the three-dimensional object detection device 33 calculates the movement distance from the change in time of the difference waveform DWt and the difference waveform DWt-1.
[052] More specifically, the three-dimensional object detection device 33 divides the difference waveform DWt into a plurality of small areas DWt1 to DWtn (where n is an arbitrary integer of 2 or more), as illustrated in Figure 6. Figure 6 is a view illustrating the small areas DWt1 to DWtn divided by the three-dimensional object detection unit 33. The small areas DWt1 to DWtn are divided so as to overlap each other, as illustrated, for example, in Figure 6. For example, the small area DWt1 and the small area DWt2 overlap, and the small area DWt2 and the small area DWt3 overlap.
[053] Next, the three-dimensional object detection device 33 determines the displacement amount (the amount of movement of the difference waveform in the direction of the horizontal axis (the vertical direction in Figure 6)) for each of the small areas DWt1 to DWtn. Here, the displacement amount is determined from the difference (the distance in the direction of the horizontal axis) between the difference waveform DWt-1 at a single moment prior and the difference waveform DWt at the current moment. In this case, the three-dimensional object detection device 33 moves the difference waveform DWt-1 at a single moment prior in the direction of the horizontal axis for each of the small areas DWt1 to DWtn, assesses the position (the position in the direction of the horizontal axis) at which the error with respect to the difference waveform DWt at the current moment is at a minimum, and determines as the displacement amount the amount of movement in the direction of the horizontal axis at the position at which the error from the original position of the difference waveform DWt-1 is at a minimum. The three-dimensional object detection device 33 then counts the displacement amounts determined for each of the small areas DWt1 to DWtn and forms a histogram.
[054] Figure 7 is a view illustrating an example of the histogram obtained by the three-dimensional object detection unit 33. As illustrated in Figure 7, a certain amount of variability occurs in the displacement amount, which is the movement distance at which the error between each of the small areas DWt1 to DWtn and the difference waveform DWt-1 at a single moment prior is at a minimum. Therefore, the three-dimensional object detection device 33 forms a histogram from the displacement amounts, which include the variability, and calculates the movement distance from the histogram. At this point, the three-dimensional object detection device 33 calculates the movement distance of the three-dimensional object from the maximum value of the histogram. In other words, in the example illustrated in Figure 7, the three-dimensional object detection device 33 calculates the displacement amount that indicates the maximum value of the histogram as the movement distance z*. In this manner, in the present embodiment, a more accurate movement distance can be calculated from the maximum value, even when there is variability in the displacement amount. The movement distance z* is the relative movement distance of the three-dimensional object with respect to the host vehicle. Accordingly, the three-dimensional object detection device 33 calculates the absolute movement distance based on the movement distance z* thus obtained and the signal from the speed sensor 20 when the absolute movement distance is to be calculated.
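A simplified sketch of this step is shown below, assuming the two difference waveforms are equal-length NumPy arrays, using fixed-size overlapping small areas and an unweighted histogram; the area size, search range, and parameter names are illustrative assumptions rather than values taken from the text. A per-area weighting, as described in the following paragraphs, could additionally be applied when the histogram is formed.

```python
import numpy as np

def movement_distance(dw_now: np.ndarray, dw_prev: np.ndarray,
                      area_len: int = 16, max_shift: int = 20) -> float:
    """Estimate the relative movement distance from two difference waveforms.

    The waveform is split into overlapping small areas; for each area the
    shift of DWt-1 that minimizes the error against DWt is taken as the
    displacement amount, the displacement amounts are collected into a
    histogram, and the bin with the maximum count gives the movement
    distance (here expressed in waveform samples).
    """
    length = len(dw_now)
    shifts = np.arange(-max_shift, max_shift + 1)
    displacements = []
    for start in range(0, length - area_len, area_len // 2):  # overlapping areas
        seg_now = dw_now[start:start + area_len]
        errors = []
        for s in shifts:
            lo, hi = start + s, start + s + area_len
            if lo < 0 or hi > length:
                errors.append(np.inf)
            else:
                errors.append(np.abs(seg_now - dw_prev[lo:hi]).sum())
        displacements.append(shifts[int(np.argmin(errors))])
    hist, edges = np.histogram(displacements, bins=len(shifts),
                               range=(-max_shift - 0.5, max_shift + 0.5))
    return float(edges[int(np.argmax(hist))] + 0.5)  # histogram maximum
```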
[055] Thus, in the present embodiment, the movement distance of the three-dimensional object is calculated from the displacement amount of the difference waveform DWt when the error between the difference waveforms DWt generated at different times is at a minimum; this allows the movement distance to be calculated from the displacement amount, which is one-dimensional information about a waveform, and allows the computation cost to be kept low when the movement distance is calculated. Furthermore, dividing the difference waveforms DWt generated at different times into a plurality of small areas DWt1 to DWtn allows a plurality of waveforms representing the locations of the three-dimensional object to be obtained, thereby allowing the displacement amount at each location of the three-dimensional object to be determined and allowing the movement distance to be determined from a plurality of displacement amounts. The accuracy of calculating the movement distance can therefore be improved. In the present embodiment, the movement distance of the three-dimensional object is calculated from the change in time of the difference waveform DWt, which includes height direction information. Consequently, in contrast with a focus solely on the movement of a single point, the detection location prior to the change in time and the detection location after the change in time are specified with the height direction information included and accordingly readily end up being the same location on the three-dimensional object; the movement distance is calculated from the change in time at the same location; and the accuracy of calculating the movement distance can be improved.
[056] When a histogram is to be formed, the three-dimensional object detection device 33 may impart a weighting to each of the plurality of small areas DWt1 to DWtn, and count the displacement amounts determined for each of the small areas DWt1 to DWtn according to the weighting to form the histogram. Figure 8 is a view illustrating the weighting used by the three-dimensional object detection unit 33.
[057] As illustrated in Figure 8, a small area DWm (where m is an integer of 1 or greater and n − 1 or less) is flat. In other words, in the small area DWm, there is little difference between the maximum and minimum values of the count of the number of pixels that indicate a predetermined difference. The three-dimensional object detection device 33 reduces the weighting of this type of small area DWm. This is because a flat small area DWm lacks a characteristic and there is a high possibility that an error will be magnified when the displacement amount is calculated.
[058] On the other hand, a small area DWm+k (where k is an integer of n − m or less) has abundant undulation. In other words, in the small area DWm+k, there is considerable difference between the maximum and minimum values of the count of the number of pixels that indicate a predetermined difference. The three-dimensional object detection device 33 increases the weighting of this type of small area DWm+k. This is because a small area DWm+k with abundant undulation is characteristic and there is a high possibility that the displacement amount will be accurately calculated. Weighting the small areas in this manner makes it possible to increase the accuracy of calculating the movement distance.
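One possible sketch of such a weighting uses the spread between the maximum and minimum count values within each small area as the weight; this particular measure is a plausible reading of the description above, not a formula given in the text, and the area size matches the illustrative value used in the earlier sketch.

```python
import numpy as np

def small_area_weights(dw_now: np.ndarray, area_len: int = 16) -> np.ndarray:
    """Weight each small area by how much undulation it contains.

    Flat areas (small max-min spread of the counts) receive small weights,
    while areas with abundant undulation receive large weights; the weights
    can then be applied when the displacement amounts are counted into the
    histogram.
    """
    weights = []
    for start in range(0, len(dw_now) - area_len, area_len // 2):  # overlapping areas
        segment = dw_now[start:start + area_len]
        weights.append(float(segment.max() - segment.min()))
    return np.array(weights)
```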
[059] The difference waveform DWt is divided into a plurality of small areas DWt1 to DWtn in the present embodiment in order to increase the accuracy of calculating the movement distance, but division into the small areas DWt1 to DWtn is not required when such accuracy is not so needed. In this case, the three-dimensional object detection device 33 calculates the movement distance from the displacement amount of the difference waveform DWt when the error between the difference waveform DWt and the difference waveform DWt-1 is at a minimum. In other words, the method for determining the displacement amount between the difference waveform DWt-1 at a single moment prior and the difference waveform DWt at the current moment is not limited to the details described above.
[060] The three-dimensional object detection device 33 in the present embodiment determines the movement speed of the host vehicle V1 (camera 10) and determines the displacement amount for a fixed object from the determined movement speed. After the displacement amount of the fixed object has been determined, the three-dimensional object detection device 33 ignores the displacement amount that corresponds to the fixed object among the maximum values of the histogram, and calculates the movement distance of the adjacent vehicle.
[061] Figure 9 is a view illustrating another example of the histogram obtained by the three-dimensional object detection unit 33. When a fixed object is present in addition to the adjacent vehicle within the view angle of the camera 10, two maximum values r1, r2 appear in the resulting histogram. In this case, one of the two maximum values r1, r2 is the displacement amount of the fixed object. Consequently, the three-dimensional object detection device 33 determines the displacement amount for the fixed object from the movement speed, ignores the maximum value that corresponds to that displacement amount, and calculates the movement distance of the three-dimensional object using the remaining maximum value. It is thereby possible to prevent a situation in which the accuracy of calculating the movement distance of the three-dimensional object is reduced by the fixed object.
[062] Even when the displacement amount corresponding to the fixed object is ignored, there may be a plurality of adjacent vehicles present within the view angle of the camera 10 when there is a plurality of remaining maximum values. However, a plurality of three-dimensional objects being present within the detection areas A1, A2 occurs very rarely. The three-dimensional object detection device 33 therefore stops calculating the movement distance. In the present embodiment, it is thereby possible to prevent a situation in which an erroneous movement distance is calculated, such as when there is a plurality of maximum values.
[063] The three-dimensional object detection device 33 also time-differentiates the calculated relative movement distance of the three-dimensional object to thereby calculate the relative movement speed of the three-dimensional object with respect to the host vehicle, and also adds the host vehicle speed detected by the speed sensor 20 to the calculated relative movement speed to thereby calculate the absolute movement speed of the three-dimensional object.
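As a small illustrative sketch of this step (the unit conversion and the parameter names are assumptions, not values from the text):

```python
def movement_speeds(rel_distance_m: float, dt_s: float,
                    host_speed_kmh: float) -> tuple:
    """Time-differentiate the relative movement distance to obtain the
    relative movement speed, then add the host vehicle speed to obtain the
    absolute movement speed of the three-dimensional object (km/h)."""
    rel_speed_kmh = (rel_distance_m / dt_s) * 3.6  # m/s -> km/h
    abs_speed_kmh = rel_speed_kmh + host_speed_kmh
    return rel_speed_kmh, abs_speed_kmh
```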
[064] After the difference waveform DWt has been generated, the three-dimensional object detection device 33 detects an adjacent vehicle present in the adjacent lane based on the generated difference waveform DWt. Here, Figure 10 is a view to describe the method for assessing another vehicle present in an adjacent lane, and illustrates an example of the difference waveform DWt and the threshold value a for detecting an adjacent vehicle present in an adjacent lane. For example, the three-dimensional object detection device 33 determines whether the peak of the generated difference waveform DWt is at the threshold value a or greater; when the peak of the generated difference waveform DWt is at the threshold value a or greater, the detected three-dimensional object is assessed to be an adjacent vehicle present in the adjacent lane, and when the peak of the difference waveform DWt is not at the threshold value a or greater, the three-dimensional object detected by the three-dimensional object detection unit 33 is not assessed to be an adjacent vehicle present in the adjacent lane, as illustrated in Figure 10. When the detected three-dimensional object has been assessed to be an adjacent vehicle, the three-dimensional object detection device 33 transmits a notification signal to the notification device 50, and thereby notifies the driver that an adjacent vehicle is present at the rear of the host vehicle.
[065] Returning to Figure 3, the detection area adjustment unit 34 sets the detection areas A1, A2 for detecting a three-dimensional object. Specifically, the detection area adjustment unit 34 determines whether the host vehicle has passed the adjacent vehicle based on the detection results of the three-dimensional object detection device 33, and when it has been determined that the host vehicle has passed the adjacent vehicle, the detection areas A1, A2 are enlarged rearward with respect to the direction of advance of the vehicle.
[066] Figure 11 is a view to describe the method for adjusting the detection areas by the detection area adjustment unit 34. Figure 11 illustrates a situation in which two adjacent vehicles V2, V2' are traveling consecutively in the adjacent lane, and the leading first adjacent vehicle V2 has been detected. In the example illustrated in Figure 11, only detection area A1 is illustrated and described, but detection area A2 is adjusted in the same manner.
[067] The detection area adjustment unit 34 sets the width (extent) of the detection area A1 in the direction of advance of the host vehicle in advance to be, for example, 7 m, and the detection of three-dimensional objects is carried out in this detection area A1. As illustrated in Figure 11(A), when the leading first adjacent vehicle V2 has been detected by the three-dimensional object detection unit 33, the detection area adjustment unit 34 determines whether the relative movement speed of the adjacent vehicle V2 with respect to the host vehicle V1 is a predetermined speed or greater. The predetermined speed is not particularly limited, but can be set to be, for example, 10 km/h (that is, the host vehicle speed is 10 km/h or more greater than the speed of the adjacent vehicle V2), in consideration of the error in detecting the relative movement speed of the adjacent vehicle V2 with respect to the host vehicle V1. The relative movement speed of the adjacent vehicle V2 with respect to the host vehicle V1 can be acquired from the three-dimensional object detection device 33, or can be calculated by the detection area adjustment unit 34.
[068] If the relative movement speed of the adjacent vehicle V2 with respect to the host vehicle V1 is the predetermined speed or greater, the detection area adjustment unit 34 determines that the first adjacent vehicle V2 is passing the host vehicle V1, and the detection area A1 is enlarged rearward with respect to the direction of advance of the vehicle V1, as illustrated in Figure 11(B). For example, when the width of the detection area A1 in the direction of advance of the host vehicle V1 set in advance is 7 m, the detection area adjustment unit 34 widens the detection areas A1, A2 rearward by 2 m with respect to the direction of advance of the vehicle when such passing has been determined, so that the width of the detection area A1 in the direction of advance of the host vehicle is set to 9 m in total. In Figure 11(B), the range by which the detection area A1 is enlarged rearward with respect to the direction of advance of the vehicle is illustrated in grey.
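A minimal sketch of this adjustment, using the 7 m base width, the 2 m rearward enlargement, and the 10 km/h threshold given as examples above; the function and parameter names are assumptions made for illustration:

```python
def detection_area_length(adjacent_detected: bool, rel_speed_kmh: float,
                          base_length_m: float = 7.0,
                          enlargement_m: float = 2.0,
                          passing_threshold_kmh: float = 10.0) -> float:
    """Return the width of the detection area in the direction of travel.

    When an adjacent vehicle has been detected and its relative movement
    speed with respect to the host vehicle is at the predetermined value or
    greater, the area is enlarged rearward (7 m -> 9 m in the example in
    the text); otherwise the pre-set width is kept.
    """
    if adjacent_detected and rel_speed_kmh >= passing_threshold_kmh:
        return base_length_m + enlargement_m
    return base_length_m
```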
[069] Therefore, when it has been determined that the first adjacent vehicle V2 is passing the host vehicle V1, enlarging the detection area A1 rearward with respect to the direction of advance of the vehicle V1 allows the second adjacent vehicle V2', which follows the leading first adjacent vehicle V2, to be detected in the detection area A1, as illustrated in Figure 11(B).
[070] On the other hand, in the conventional case in which the detection area A1 is not enlarged rearward with respect to the direction of advance of the vehicle V1, the second adjacent vehicle V2' cannot be detected in the detection area A1 once the first adjacent vehicle V2 has passed the host vehicle V1, even when it has been determined that the adjacent vehicle V2 is passing the host vehicle V1. For this reason, it is determined by the three-dimensional object detection unit 33 that an adjacent vehicle is not present behind the host vehicle and notification is not performed by the notification device 50, even though a second adjacent vehicle V2' is present behind the host vehicle. Therefore, in a situation in which the leading first adjacent vehicle V2 is passing the host vehicle V1 and no notification is made that an adjacent vehicle is present behind the host vehicle, there are cases in which the driver may determine that an adjacent vehicle is not present in the adjacent lane behind the host vehicle and change the lane of the host vehicle V1, even though a second adjacent vehicle V2' is present behind the host vehicle V1, and the host vehicle V1 and the second adjacent vehicle V2' may approach each other. Conversely, in the present embodiment, when the leading first adjacent vehicle V2 has been determined to be passing the host vehicle V1, enlarging the detection area A1 rearward with respect to the direction of advance of the vehicle V1 allows the second adjacent vehicle V2' to be detected in the detection area A1, and it is possible to notify the driver of the presence of the second adjacent vehicle V2'.
[071] In the present embodiment, the detection area adjustment unit 34 is capable of varying the amount by which the detection areas A1, A2 are to be enlarged rearward with respect to the direction of advance of the vehicle in accordance with the relative movement speed of the adjacent vehicle V2 with respect to the host vehicle V1. For example, the detection area adjustment unit 34 can predict that the greater the relative movement speed of the first adjacent vehicle V2 with respect to the host vehicle V1, the greater the relative movement speed of the second adjacent vehicle V2' as well, and that the second adjacent vehicle will soon catch up with the host vehicle, and can increase the amount by which the detection areas A1, A2 are to be enlarged rearward. Alternatively, it can be determined that the greater the relative movement speed of the first adjacent vehicle V2 with respect to the host vehicle V1, the shorter the time that the adjacent vehicle will be present behind the host vehicle, and the amount by which the detection areas A1, A2 are to be enlarged rearward can be reduced. Furthermore, it is also possible to use a configuration in which consideration is given to the speed of the host vehicle, whereby it is predicted that the second adjacent vehicle V2' will not soon catch up with the host vehicle when the speed of the host vehicle is sufficiently high and the distance between the first adjacent vehicle V2 and the second adjacent vehicle V2' is large, and the amount by which the detection areas A1, A2 are to be enlarged is not increased, even when the relative movement speed of the first adjacent vehicle V2 with respect to the host vehicle V1 is high.
[072] In addition, the detection area adjustment unit 34 acquires steering angle information from the steering angle sensor 40 and determines whether the host vehicle is changing direction based on the acquired steering angle information. When it has been determined that the host vehicle is changing direction, the detection area adjustment unit 34 calculates the turning radius of the host vehicle based on the steering angle information, and modifies the amount by which the detection areas are to be enlarged rearward according to the calculated turning radius when the host vehicle has been passed by an adjacent vehicle. Specifically, the detection area adjustment unit 34 has maps or computational formulas that indicate the correlation between the turning radius and the detection areas A1, A2 to be modified according to the turning radius, and decides the amount by which the detection areas are to be enlarged rearward when the host vehicle has been passed by an adjacent vehicle.
[073] Here, Figure 12 is a view to describe the method for adjusting the detection area when the host vehicle is changing direction. Figure 12(A) is a view illustrating an example of the detection area set when the host vehicle is changing direction in the present embodiment, and Figure 12(B) is a view illustrating an example of the detection area set when the host vehicle is changing direction and the detection area is enlarged uniformly, without the adjustment method of the present embodiment. In a situation in which the host vehicle V1 has been passed by the leading first adjacent vehicle V2, the detection area adjustment unit 34 modifies the amount by which the detection area A2 is enlarged rearward so that the detection area does not enter the lane in which the host vehicle is traveling when it has been determined that the host vehicle is changing direction, as illustrated in Figure 12(A). Specifically, the detection area adjustment unit 34 reduces the amount by which the detection area A2 set on the inner side of the turn is to be enlarged rearward, in accordance with the magnitude of the turning radius of the host vehicle V1, so that the detection area does not enter the lane in which the host vehicle is traveling.
[074] On the other hand, when the detection area A2 is uniformly enlarged by a fixed amount when it is determined that the host vehicle V1 is changing direction, a rear vehicle V3 traveling in the lane in which the host vehicle V1 is traveling is detected in the detection area A2, as illustrated in Figure 12(B), and there are cases in which the rear vehicle V3 is thereby erroneously detected as an adjacent vehicle traveling in an adjacent lane. In contrast, in the present embodiment, the amount by which the detection area A2 is enlarged rearward with respect to the direction of advance of the vehicle V1 is reduced commensurately with a smaller turning radius of the host vehicle V1, as illustrated in Figure 12(A), thereby making it possible to effectively prevent a situation in which the rear vehicle V3 is detected in the detection area A2. As a result, an adjacent vehicle traveling in an adjacent lane can be properly detected.
[075] Next, the process for detecting an adjacent vehicle according to the present embodiment is described. Figure 13 is a flowchart illustrating the method for detecting an adjacent vehicle of the first embodiment. First, the detection areas A1, A2 for detecting an adjacent vehicle are adjusted by the detection area adjustment unit 34 (step S101), as illustrated in Figure 13. In step S101, the detection areas A1, A2 that are adjusted are those adjusted in a detection area adjustment process described later (see Figure 14).
[076] The data of an image captured by the camera 10 is acquired by the computer 30 (step S102), and aerial view image data PBt is generated (step S103) by the viewpoint conversion unit 31 based on the captured image data thus acquired.
[077] The alignment unit 32 then aligns the aerial view image data PBt and the aerial view image data PBt-1 at a single moment prior, and generates difference image data PDt (step S104). The three-dimensional object detection device 33 then counts the number of difference pixels DP that have a pixel value of "1" to thereby generate a difference waveform DWt from the difference image data PDt (step S105).
[078] The three-dimensional object detection device 33 then determines whether the peak of the difference waveform DWt is at a predetermined threshold value a or greater (step S106). When the peak of the difference waveform DWt is not at the predetermined threshold value a or greater, that is, when there is essentially no difference, it is considered that a three-dimensional object is not present in the captured image. Therefore, when it has been determined that the peak of the difference waveform DWt is not at the threshold value a or greater (step S106 = No), the three-dimensional object detection device 33 determines that another vehicle is not present (step S116), and then returns to step S101 and repeats the above-described process illustrated in Figure 13.
[079] On the other hand, when the peak of the difference waveform DWt is determined to be at the predetermined threshold value a or greater (step S106 = Yes), it is determined by the three-dimensional object detection unit 33 that a three-dimensional object is present in the adjacent lane, and the process proceeds to step S107, in which the difference waveform DWt is divided into a plurality of small areas DWt1 to DWtn by the three-dimensional object detection unit 33. The three-dimensional object detection device 33 then imparts a weighting to each of the small areas DWt1 to DWtn (step S108), calculates the displacement amount for each of the small areas DWt1 to DWtn (step S109), and generates a histogram in consideration of the weightings (step S110).
[080] The three-dimensional object detection device 33 calculates the relative movement distance, which is the movement distance of the adjacent vehicle with respect to the host vehicle, based on the histogram, time-differentiates the calculated relative movement distance to thereby calculate the relative movement speed of the three-dimensional object with respect to the host vehicle (step S111), adds the host vehicle speed detected by the speed sensor 20 to the calculated relative movement speed, and calculates the absolute movement speed of the three-dimensional object (step S112).
[081] The three-dimensional object detection device 33 then determines whether the absolute movement speed of the three-dimensional object is 10 km/h or greater and whether the relative movement speed of the three-dimensional object with respect to the host vehicle is +60 km/h or less (step S113). When both conditions are satisfied (step S113 = Yes), the three-dimensional object detection device 33 determines that the detected three-dimensional object is an adjacent vehicle present in the adjacent lane and that an adjacent vehicle is present in the adjacent lane (step S114). In the subsequent step S115, the driver is thereby notified by the notification device 50 that an adjacent vehicle is present behind the host vehicle. The process then returns to step S101, and the process illustrated in Figure 13 is repeated. On the other hand, when either of the conditions is not satisfied (step S113 = No), the three-dimensional object detection device 33 determines that an adjacent vehicle is not present in the adjacent lane (step S116). The process returns to step S101, and the process illustrated in Figure 13 is repeated.
[082] In the present embodiment, the detection areas A1, A2 are in the rear lateral directions of the host vehicle, and focus is placed on whether the host vehicle may possibly make contact with an adjacent vehicle should a lane change be made. The process of step S113 is therefore implemented. In other words, assuming that the system of the present embodiment is operated on an expressway, when the speed of an adjacent vehicle is less than 10 km/h, it would rarely be a problem even if an adjacent vehicle were present, because the adjacent vehicle would be positioned far behind the host vehicle when a lane change is made. Similarly, when the relative movement speed of an adjacent vehicle with respect to the host vehicle exceeds +60 km/h (that is, when the adjacent vehicle is moving at a speed 60 km/h greater than the speed of the host vehicle), it would rarely be a problem, because the adjacent vehicle would have moved ahead of the host vehicle when a lane change is made. For this reason, it can be said that step S113 determines adjacent vehicles that would pose a problem should a lane change be made.
[083] In step S113, it is determined whether the absolute movement speed of the adjacent vehicle is 10 km/h or greater, and whether the relative movement speed of the adjacent vehicle with respect to the host vehicle is +60 km/h or less, whereby the following effects are obtained. For example, a possible case is that the absolute movement speed of a fixed object is detected as being several kilometers per hour depending on the attachment error of the camera 10. Therefore, determining whether the speed is 10 km/h or greater makes it possible to reduce the possibility that the fixed object will be determined to be an adjacent vehicle. Furthermore, it is possible that the relative speed of an adjacent vehicle with respect to the host vehicle is detected as being greater than +60 km/h due to noise. Therefore, determining whether the relative speed is +60 km/h or less makes it possible to reduce the possibility of erroneous detection due to noise.
[084] Furthermore, it is also possible to determine whether the relative movement speed of the adjacent vehicle is not negative, or is not 0 km/h, in place of the processing of step S113.
[085] Next, the detection area adjustment process according to the first embodiment will be described. Figure 14 is a flowchart illustrating the detection area adjustment process according to the first embodiment. The detection area adjustment process described below is performed in parallel with the adjacent vehicle detection process illustrated in Figure 13, and the detection areas A1, A2 set by this detection area adjustment process are set as the detection areas in the adjacent vehicle detection process illustrated in Figure 13.
[086] First, in step S201, the detection area adjustment unit 34 determines whether an adjacent vehicle is being detected. Specifically, when the detection area adjustment unit 34 has determined that an adjacent vehicle is present in the detection areas A1, A2 in the adjacent vehicle detection process illustrated in Figure 13, it is determined that an adjacent vehicle is being detected, and the process proceeds to step S202; when the detection area adjustment unit 34 has determined that an adjacent vehicle is not present in the detection areas A1, A2, it is determined that an adjacent vehicle is not being detected, and the process remains at step S201 until an adjacent vehicle is detected.
[087] In step S202, the detection area adjustment unit 34 determines whether an adjacent vehicle has passed the host vehicle. Specifically, the detection area adjustment unit 34 acquires the relative movement speed of the adjacent vehicle with respect to the host vehicle from the three-dimensional object detection device 33, and determines whether the relative movement speed of the adjacent vehicle is a predetermined speed (e.g., 10 km/h) or greater based on the acquired relative movement speed of the adjacent vehicle. When the relative movement speed of the adjacent vehicle with respect to the host vehicle is the predetermined speed or greater, it is determined that the adjacent vehicle is passing the host vehicle, and the process proceeds to step S203. On the other hand, when the relative movement speed of the adjacent vehicle with respect to the host vehicle is less than the predetermined speed, it is determined that the adjacent vehicle is not passing the host vehicle, and the process returns to step S201.
[088] In step S203, the detection area adjustment unit 34 determines whether the host vehicle is changing direction. The method of assessing whether the host vehicle is changing direction is not particularly limited, but in the present embodiment, the following method is used to assess whether the host vehicle is changing direction.
[089] In other words, the detection area adjustment unit 34 first predicts whether the host vehicle V1 will be in a turning state after a certain period of time (this may also be referred to below as turning state prediction). Specifically, the detection area adjustment unit 34 refers to a captured image acquired from the camera 10, detects lanes (e.g., white lines) on the road surface, and calculates the lane curvature as a parameter representing the shape of the road. The detection area adjustment unit 34 then predicts the shape of the road ahead of the host vehicle, that is, the turning radius of the host vehicle when a certain period of time has elapsed, based on the calculated lane curvature and the speed obtained from the speed sensor 20.
[090] In addition, the detection area adjustment unit 34 calculates the current turning radius of the host vehicle V1 according to Formula 1 below, based on the speed of the host vehicle obtained from the speed sensor 20 and the steering angle obtained from the steering angle sensor 40. [Formula 1]
ρ = (1 + KV²) · (L · n / θ)
[091] In Formula 1, ρ is the turning radius, K is the stability factor, V is the host vehicle speed, L is the wheel base, n is the steering gear ratio, and θ is the steering angle.
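The form of Formula 1 shown above is a reconstruction based on the listed variables (the standard turning-radius relation with a stability factor). As a minimal sketch with placeholder vehicle parameters, it could be evaluated as follows; K, L, and n are vehicle-specific values and the defaults here are assumptions for illustration only.

```python
import math

def turning_radius(v_mps: float, steering_wheel_angle_rad: float,
                   wheel_base_m: float = 2.7,
                   stability_factor: float = 0.002,
                   steering_gear_ratio: float = 16.0) -> float:
    """Current turning radius rho = (1 + K*V^2) * (L * n / theta)."""
    return (1.0 + stability_factor * v_mps ** 2) * (
        wheel_base_m * steering_gear_ratio / steering_wheel_angle_rad)

# Example: 20 m/s (72 km/h) with a 30-degree steering wheel angle
print(turning_radius(20.0, math.radians(30.0)))
```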
[092] The detection area adjustment unit 34 assesses that the host vehicle V1 is changing direction when the turning radius predicted in the above-described turning state prediction and the current turning radius obtained based on Formula 1 are each at a predetermined threshold value or less. When it has been determined that the host vehicle is changing direction, the process proceeds to step S211. On the other hand, when it has been determined that the host vehicle is not changing direction, the process proceeds to step S204.
[093] In steps S204 to S210, processing is performed to enlarge the detection areas backwards with respect to the direction of advance of the vehicle based on the relative movement speed of the adjacent vehicle with respect to the host vehicle and the host vehicle speed.
[094] Specifically, the detection area adjustment unit 34 enlarges and sets (step S206) the detection areas backwards by L (m) from the range set in advance when the relative movement speed of the adjacent vehicle with respect to the host vehicle is a first speed or greater (step S204 = Yes) and the host vehicle speed is a second speed or greater (step S205 = Yes). On the other hand, the detection area adjustment unit 34 enlarges and sets (step S207) the detection areas backwards by L + α (m) from the range set in advance when the relative movement speed of the adjacent vehicle with respect to the host vehicle is the first speed or greater (step S204 = Yes) and the host vehicle speed is less than the second speed (step S205 = No). The first speed is not particularly limited, and it is possible to use a speed that allows the determination that a second adjacent vehicle will soon catch up with the host vehicle when, for example, the second adjacent vehicle is traveling at the first speed. Furthermore, the second speed is not particularly limited, and it is possible to use a speed that allows the determination that there is congestion when, for example, the host vehicle is traveling at the second speed.
[095] In this way, when the relative movement speed of the adjacent vehicle with respect to the host vehicle is the first speed or greater and the host vehicle speed is less than the second speed, it is determined that a second adjacent vehicle will soon reach the host vehicle, and the amount by which the detection areas are enlarged is set to an amount (e.g., L + α (m)) that is greater than the amount decided in advance (e.g., L (m)) (step S207). The second adjacent vehicle that will soon pass the host vehicle can thereby be properly detected. Furthermore, when the relative movement speed of the adjacent vehicle with respect to the host vehicle is the first speed or greater and the speed of the host vehicle is the second speed or greater, the speed of the first adjacent vehicle V2 is predicted to be considerably high because the speed of the host vehicle is sufficiently high, and it is determined that the distance between the first adjacent vehicle V2 and the second adjacent vehicle is increasing. Therefore, compared with when the speed of the host vehicle is less than the predetermined second speed, it is determined that the second adjacent vehicle will not soon reach the host vehicle, and the amount by which the detection areas A1, A2 are enlarged is set to the amount decided in advance (e.g., L (m)) (step S206).
[096] The detection area adjustment unit 34 enlarges and sets (step S209) the detection areas backwards by L − β (m) from the range set in advance when the relative movement speed of the adjacent vehicle with respect to the host vehicle is less than the first speed (step S204 = No) and the host vehicle speed is the second speed or greater (step S208 = Yes). On the other hand, the detection area adjustment unit 34 enlarges and sets (step S210) the detection areas backwards by L (m) from the range set in advance when the relative movement speed of the adjacent vehicle with respect to the host vehicle is less than the first speed (step S204 = No) and the host vehicle speed is less than the second speed (step S208 = No).
[097] In this way, the detection area adjustment unit 34 can determine that there is a high possibility that the distance between the first and second consecutive adjacent vehicles is short and that two adjacent vehicles are present immediately behind the host vehicle, and the amount by which the detection areas A1, A2 are to be enlarged is set to the amount decided in advance (e.g., L (m)) (step S210), when the relative movement speed of the adjacent vehicle with respect to the host vehicle is less than the first speed and the speed of the host vehicle is less than the second speed, for example, in congested conditions. On the other hand, when the speed of the host vehicle is the predetermined second speed or greater, the first adjacent vehicle also needs to travel at a high speed, so it is determined that the distance between the first adjacent vehicle and the second consecutive adjacent vehicle is constant, and the amount by which the detection areas are to be enlarged backwards is set to an amount that is less than the amount set in advance (e.g., L − β (m)) (step S209). In this way, deciding the amount by which the detection areas are to be enlarged based on the relative movement speed of the adjacent vehicle with respect to the host vehicle and the host vehicle speed allows the detection areas to be set to an appropriate range that matches the travel conditions of the host vehicle.
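The enlargement amounts chosen in steps S204 to S210 can be summarized by a simple decision function; the sketch below is only an illustration, with base_m, alpha_m, and beta_m standing in for the amounts L, α, and β, which the text leaves unspecified.

```python
def rearward_enlargement_m(relative_speed_kmh: float, host_speed_kmh: float,
                           first_speed_kmh: float, second_speed_kmh: float,
                           base_m: float, alpha_m: float, beta_m: float) -> float:
    """Amount by which the detection areas A1, A2 are enlarged rearwards,
    following the branching of steps S204 to S210."""
    if relative_speed_kmh >= first_speed_kmh:          # step S204 = Yes
        if host_speed_kmh >= second_speed_kmh:         # step S205 = Yes
            return base_m                              # step S206: L
        return base_m + alpha_m                        # step S207: L + alpha
    if host_speed_kmh >= second_speed_kmh:             # step S208 = Yes
        return base_m - beta_m                         # step S209: L - beta
    return base_m                                      # step S210: L
```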
[098] Further, when it has been determined that the host vehicle is changing direction in step S203, the detection area adjustment unit 34 calculates the current turning radius of the host vehicle in step S211. The method for calculating the host vehicle's current turn radius is not particularly limited, and, in the present embodiment, the host vehicle's current turn radius can be calculated in the manner described below.
[099] In other words, the detection area adjustment unit 34 decides the current turning radius based on the turning radius calculated in step S203. Specifically, the detection area adjustment unit 34 refers to the time information and predicts the current turning radius based on the turning radius after the lapse of the predetermined period of time, as predicted in the turning state prediction of step S203. The detection area adjustment unit 34 compares the predicted current turning radius with the turning radius calculated with Formula 1 noted above, and calculates the probability (i.e., degree of plausibility) of the predicted current turning radius. The detection area adjustment unit 34 then decides the turning radius after the predetermined period of time, as predicted in the turning state prediction, to be the final turning radius when the probability is at a predetermined evaluation value or greater, and, conversely, decides the turning radius calculated using Formula 1 noted above to be the final turning radius when the probability is less than the predetermined evaluation value.
[0100] In step S212, the detection area adjustment unit 34 decides the amount by which the detection areas are to be enlarged backwards based on the final turning radius decided in step S211, and enlarges and sets the detection areas A1, A2 backwards by the decided amount from the range set in advance.
[0101] Specifically, the detection area adjustment unit 34 reduces the amount by which the detection areas are to be enlarged backwards commensurately with a smaller turning radius, so that the detection areas do not enter the lane on which the host vehicle is traveling, as illustrated in Figure 12(A). The detection area adjustment unit 34 has maps or computational formulas that indicate the correlation between the turning radius and the detection areas A1, A2, which are modified according to the turning radius, and adjusts the amount by which the detection areas A1, A2 are modified using these maps or computational formulas.
[0102] In step S213, the detection area adjustment unit 34 determines whether an adjacent vehicle is still being detected. For example, when the first adjacent vehicle V2 is still being detected in a detection area, the process remains at step S213, as illustrated in Figure 11(A); conversely, when the first adjacent vehicle is no longer detected, as illustrated in Figure 11(B), the process proceeds to step S214. When a second adjacent vehicle has been detected before the first adjacent vehicle is no longer detected, it is determined whether the second adjacent vehicle is still being detected. In this way, it is possible to properly detect a third adjacent vehicle following the second adjacent vehicle.
[0103] In step S214, the detection area adjustment unit 34 assesses whether a predetermined period of time (e.g., two seconds) has elapsed after an adjacent vehicle is no longer detected in the detection areas. When the predetermined period of time has not elapsed, the process remains at step S214 until it has elapsed, and when the predetermined period of time has elapsed, the process proceeds to step S215. In step S215, the detection area adjustment unit 34 gradually narrows the detection areas forward with respect to the direction of advance of the vehicle at a speed lower than the speed at which the detection areas were enlarged, and finally returns the detection areas to their original sizes before enlargement.
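The hold-then-shrink behaviour of steps S214 and S215 could be realized per processing cycle as in the sketch below; the rate, cycle time, and function names are assumptions made only for illustration.

```python
def updated_rear_width(width_m: float, original_width_m: float,
                       time_since_lost_s: float, hold_time_s: float,
                       narrowing_rate_mps: float, cycle_s: float) -> float:
    """Keep the enlarged width until hold_time_s has elapsed after the adjacent
    vehicle was last detected (step S214), then narrow it a little each cycle,
    never going below the original width (step S215)."""
    if time_since_lost_s < hold_time_s:
        return width_m
    return max(width_m - narrowing_rate_mps * cycle_s, original_width_m)
```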
[0104] Here, Figure 15 is a graph illustrating an example of the width of the detection areas in the direction of advance of the host vehicle set in the present embodiment. For example, in the situational example illustrated in Figure 15, when the first adjacent vehicle V2 is detected in the detection areas A1, A2 at time t1 (step S201 = Yes), as illustrated in Figure 11(A), and the first adjacent vehicle V2 has been assessed as passing the host vehicle V1 (step S202 = Yes), the width of the detection areas in the direction of advance is increased backwards from w1 to w2 (w2 > w1), as illustrated in Figure 11(B) (steps S206, S207, S209, S210).
[0105] The detection area adjustment unit 34 then determines whether the first adjacent vehicle V2 is still being detected after the detection areas A1, A2 have been enlarged at time t1 (step S213). In the example illustrated in Figure 15, the first adjacent vehicle V2 is no longer detected at time t2, and the detection areas are gradually narrowed forward from time t2 + n, at which a predetermined time n has elapsed from time t2 (step S214). Further, in the situational example illustrated in Figure 15, the width of the detection areas in the direction of advance is finally returned, at time t3, to the width w1 it had before the detection areas A1, A2 were enlarged.
[0106] In this way, keeping the detection areas enlarged backwards for a predetermined period of time after the first adjacent vehicle V2 is no longer detected allows a second adjacent vehicle V2' approaching the host vehicle to be properly detected, even when the first adjacent vehicle V2 and the second adjacent vehicle V2' are spaced apart. Furthermore, when the predetermined period of time has elapsed after the first adjacent vehicle V2 is no longer detected, gradually narrowing the detection areas A1, A2 allows the second adjacent vehicle V2' approaching the host vehicle to be detected with greater reliability compared with when the detection areas A1, A2 are narrowed in a single step.
[0107] The aforementioned predetermined period of time n may be modified according to the relative movement speed of the adjacent vehicle V2 with respect to the host vehicle V1. For example, the detection area adjustment unit 34 may be configured to predict that the movement speed of the second adjacent vehicle V2' is commensurately greater for a greater relative movement speed of the first adjacent vehicle V2 with respect to the host vehicle V1 and that the second adjacent vehicle will soon reach the host vehicle, and to increase the predetermined time n. Alternatively, it is possible to use a configuration in which the time the adjacent vehicle remains behind the host vehicle is predicted to be commensurately shorter for a greater relative movement speed of the adjacent vehicle V2 with respect to the host vehicle V1, and the predetermined time n is reduced. Furthermore, it is possible to use a configuration in which the speed of the host vehicle is considered, and it is predicted that the second adjacent vehicle V2' will not soon reach the host vehicle V1 when the speed of the host vehicle is sufficiently high and the distance between the first adjacent vehicle V2 and the second adjacent vehicle V2' is large, and the predetermined time n is not increased, even when the relative movement speed of the first adjacent vehicle V2 with respect to the host vehicle V1 is high. In this way, setting the predetermined time n according to the travel state of the host vehicle allows the second adjacent vehicle V2' to be properly detected.
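One of the configurations described in this paragraph (lengthening n with the relative speed unless the host vehicle speed is high) might be sketched as follows; the gain and the speed threshold are illustrative assumptions, not values from the disclosure.

```python
def predetermined_time_n(base_n_s: float, relative_speed_kmh: float,
                         host_speed_kmh: float, gain_s_per_kmh: float,
                         high_host_speed_kmh: float) -> float:
    """Lengthen the hold time n commensurately with the relative speed of the
    first adjacent vehicle, except when the host vehicle speed is high enough
    that the gap to the second adjacent vehicle is expected to be large."""
    if host_speed_kmh >= high_host_speed_kmh:
        return base_n_s
    return base_n_s + gain_s_per_kmh * max(relative_speed_kmh, 0.0)
```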
[0108] As described above, in the present embodiment, a difference image PDt is generated based on the difference between aerial view images obtained at different times, the number of pixels indicating a predetermined difference in the difference image PDt is counted to form a frequency distribution, thereby generating a difference waveform, and an adjacent vehicle present in the adjacent lane is detected based on the generated difference waveform. Furthermore, in the present embodiment, it is determined whether the adjacent vehicle has passed the host vehicle, and when it has been determined that the adjacent vehicle has passed the host vehicle, the detection areas A1, A2 are enlarged backwards with respect to the direction of advance of the host vehicle. In the present embodiment, a first adjacent vehicle V2 is thereby detected in a situation in which two adjacent vehicles are traveling consecutively, as illustrated in Figure 11(A), and when it has been determined that the first adjacent vehicle V2 has passed the host vehicle V1, the detection area is enlarged backwards with respect to the direction of advance of the host vehicle V1, as illustrated in Figure 11(B), whereby the second adjacent vehicle V2' can be detected and the driver can thereby be notified that a second adjacent vehicle V2' is present behind the host vehicle. As a result, it is possible to effectively prevent a situation in which the first adjacent vehicle V2 has passed the host vehicle V1, whereby the driver determines that an adjacent vehicle is not present behind the host vehicle, the host vehicle makes a lane change, and the host vehicle V1 and the second adjacent vehicle V2' approach each other. <<Embodiment 2>>
[0109] Next, the three-dimensional object detection device 1a according to the second embodiment is described. The three-dimensional object detection device 1a according to the second embodiment is the same as that of the first embodiment, except that a computer 30a is provided in place of the computer 30 of the first embodiment, as illustrated in Figure 16, and the operation is as described below. Here, Figure 16 is a block diagram illustrating the details of the computer 30a according to the second embodiment.
[0110] The three-dimensional object detection device 1a according to the second embodiment is provided with a camera 10 and a computer 30a, as illustrated in Figure 16. The computer 30a is provided with a viewpoint conversion unit 31, a luminance difference calculation unit 35, an edge line detection unit 36, a three-dimensional object detection unit 33a, and a detection area adjustment unit 34. The configuration of the three-dimensional object detection device 1a according to the second embodiment is described below.
[0111] Figure 17 is a view illustrating the image capture range of the camera 10 in Figure 16; Figure 17(a) is a plan view, and Figure 17(b) is a perspective view in real space to the rear of the host vehicle V1. The camera 10 is set to a predetermined angle of view a, and the rear side of the host vehicle V1 included in the predetermined angle of view a is captured, as illustrated in Figure 17(a). The angle of view of the camera 10 is adjusted so that adjacent lanes are included in the capture range of the camera 10 in addition to the lane on which the host vehicle is traveling, in the same manner as illustrated in Figure 2.
[0112] The detection areas A1, A2 in this example are trapezoidal in a plan view (aerial view state), and the position, size, and shape of the detection areas A1, A2 are decided based on distances d1 to d4. The detection areas A1, A2 of the example illustrated in the drawing are not limited to being trapezoidal, and may also be rectangular or otherwise in an aerial view state, as illustrated in Figure 2.
[0113] Here, the distance d1 is the distance from the host vehicle V1 to the ground lines L1, L2. The ground lines L1, L2 refer to lines at which a three-dimensional object, present in a lane adjacent to the lane in which the host vehicle V1 is traveling, is in contact with the ground. In the present embodiment, the object to be detected is an adjacent vehicle V2 or the like (including two-wheeled vehicles or the like) traveling in the left or right lane behind the host vehicle V1 and adjacent to the lane of the host vehicle V1. Therefore, the distance d1, which is the position of the ground lines L1, L2 of the adjacent vehicle V2, can be decided so as to be substantially fixed from the distance d11 from the host vehicle V1 to a white line W and the distance d12 from the white line W to the position at which the adjacent vehicle V2 is expected to travel.
[0114] The distance d1 is not limited to being fixedly decided, and may be variable. In that case, the computer 30a recognizes the position of the white line W with respect to the host vehicle V1 using white line recognition or another technique, and the distance d11 is decided based on the position of the recognized white line W. The distance d1 is thereby variably set using the decided distance d11. In the present embodiment described below, the position at which the adjacent vehicle V2 travels (the distance d12 from the white line W) and the position at which the host vehicle V1 travels (the distance d11 from the white line W) are largely predictable, and the distance d1 is fixedly decided.
[0115] The distance d2 is the distance extending from the rear end of the host vehicle V1 in the direction of advance of the vehicle. The distance d2 is decided so that the detection areas A1, A2 are accommodated within at least the angle of view a of the camera 10. In the present embodiment in particular, the distance d2 is set so as to be in contact with a range partitioned within the angle of view a. The distance d3 is decided based on the size of the three-dimensional object to be detected. In the present embodiment, the object to be detected is an adjacent vehicle V2 or the like, and therefore the distance d3 is set to an extent that includes the adjacent vehicle V2.
[0116] The distance d4 indicates the height, which has been set so that the tires of the adjacent vehicle V2 or the like are included, in real space, as illustrated in Figure 17(b). In an aerial view image, the distance d4 is the extent illustrated in Figure 17(a). The distance d4 may also be a distance that does not include lanes further adjacent to the left and right adjacent lanes in the aerial view image (i.e., lanes two lanes away). This is because, when lanes two lanes away from the lane of the host vehicle V1 are included, it is no longer possible to distinguish whether an adjacent vehicle V2 is present in the adjacent lanes to the left and right of the lane in which the host vehicle V1 is traveling, or whether an adjacent vehicle is present in a lane two lanes away.
[0117] As described above, the distances d1 to d4 are decided, and the position, size, and shape of the detection areas A1, A2 are thereby decided. Specifically, the position of the upper side b1 of the detection areas A1, A2 that form a trapezoid is decided by the distance d1. The starting position C1 of the upper side b1 is decided by the distance d2. The end position C2 of the upper side b1 is decided by the distance d3. The lateral side b2 of the detection areas A1, A2 that form a trapezoid is decided by a straight line L3 extending from the camera 10 towards the starting position C1. Similarly, the lateral side b3 of the detection areas A1, A2 that form a trapezoid is decided by a straight line L4 extending from the camera 10 towards the end position C2. The position of the lower side b4 of the detection areas A1, A2 that form a trapezoid is decided by the distance d4. In this way, the areas surrounded by the sides b1 to b4 are the detection areas A1, A2. The detection areas A1, A2 are regular squares (rectangles) in real space to the rear of the host vehicle V1, as illustrated in Figure 17(b).
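A very rough sketch of how the trapezoid could be assembled from d1 to d4 is shown below; the coordinate convention (x lateral from the host vehicle, y rearward from the camera) and every name are assumptions made only to illustrate the geometry described above.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def detection_area_corners(d1: float, d2: float, d3: float, d4: float,
                           camera_xy: Point = (0.0, 0.0)) -> List[Point]:
    """Corners of detection area A1 in bird's-eye coordinates: the upper side b1
    lies at lateral distance d1 and runs from C1 (decided by d2) to C2 (decided
    by d3); the slanted sides b2 and b3 follow straight lines from the camera
    through C1 and C2; the lower side b4 lies at lateral distance d1 + d4."""
    cx, cy = camera_xy
    c1: Point = (d1, d2)        # start of upper side b1
    c2: Point = (d1, d2 + d3)   # end of upper side b1

    def on_ray(p: Point, x_target: float) -> Point:
        t = (x_target - cx) / (p[0] - cx)   # parameter along the camera->p ray
        return (x_target, cy + t * (p[1] - cy))

    return [c1, c2, on_ray(c2, d1 + d4), on_ray(c1, d1 + d4)]
```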
[0118] Returning to Figure 16, the viewpoint conversion unit 31 accepts input of captured image data of a predetermined area captured by the camera 10. The viewpoint conversion unit 31 converts the viewpoint of the input captured image data into aerial view image data, which is an aerial view state. An aerial view state is a viewing state from the point of view of an imaginary camera that looks down from above, e.g., vertically downward (or somewhat tilted downward). Viewpoint conversion can be performed using the technique described, for example, in Japanese Laid-Open Patent Application No. 2008-219063.
[0119] The luminance difference calculation unit 35 calculates luminance differences in the aerial view image data, which has been subjected to viewpoint conversion by the viewpoint conversion unit 31, in order to detect the edges of a three-dimensional object included in the aerial view image. The luminance difference calculation unit 35 calculates, for each of a plurality of positions along a perpendicular imaginary line extending in the perpendicular direction in real space, the luminance difference between two pixels near each position. The luminance difference calculation unit 35 is capable of calculating the luminance difference by a method for setting a single perpendicular imaginary line extending in the perpendicular direction in real space, or by a method for setting two perpendicular imaginary lines.
[0120] The specific method for setting two perpendicular imaginary lines is described below. The luminance difference calculation unit 35 sets a first perpendicular imaginary line that corresponds to a line segment extending in the perpendicular direction in real space, and a second perpendicular imaginary line that is different from the first perpendicular imaginary line and that corresponds to a line segment extending in the perpendicular direction in real space. The luminance difference calculation unit 35 determines the luminance difference between a point on the first perpendicular imaginary line and a point on the second perpendicular imaginary line continuously along the first perpendicular imaginary line and the second perpendicular imaginary line. The operation of the luminance difference calculation unit 35 is described in detail below.
[0121] The luminance difference calculation unit 35 sets a first perpendicular imaginary line La (referred to below as the attention line La) that corresponds to a line segment extending in the perpendicular direction in real space and passing through the detection area A1, as illustrated in Figure 18(a). The luminance difference calculation unit 35 also sets a second perpendicular imaginary line Lr (referred to below as the reference line Lr) that is different from the attention line La, corresponds to a line segment extending in the perpendicular direction in real space, and passes through the detection area A1. Here, the reference line Lr is set at a position separated from the attention line La by a predetermined distance in real space. The lines corresponding to line segments extending in the perpendicular direction in real space are lines that spread out in the radial direction from the position Ps of the camera 10 in the aerial view image. These lines spreading out in the radial direction are lines that follow the falling direction of the three-dimensional object when converted to an aerial view.
[0122] The luminance difference calculation unit 35 sets an attention point Pa on the attention line La (a point on the first perpendicular imaginary line). The luminance difference calculation unit 35 also sets a reference point Pr on the reference line Lr (a point on the second perpendicular imaginary line). The attention line La, the attention point Pa, the reference line Lr, and the reference point Pr have the real-space relationship illustrated in Figure 18(b). It is evident from Figure 18(b) that the attention line La and the reference line Lr are lines extending in the perpendicular direction in real space, and the attention point Pa and the reference point Pr are points set at substantially the same height in real space. It is not necessarily required that the attention point Pa and the reference point Pr be kept strictly at the same height, and a certain degree of error allowing the attention point Pa and the reference point Pr to be deemed at the same height is tolerated.
[0123] The luminance difference calculation unit 35 determines the luminance difference between the attention point Pa and the reference point Pr. When the luminance difference between the attention point Pa and the reference point Pr is large, it is possible that an edge is present between the attention point Pa and the reference point Pr. In the second embodiment in particular, a perpendicular imaginary line is set as a line segment extending in the perpendicular direction in real space with respect to the aerial view image, in order to detect a three-dimensional object present in the detection areas A1, A2. Therefore, there is a high possibility that an edge of a three-dimensional object exists where the attention line La has been set when the luminance difference between the attention line La and the reference line Lr is high. Accordingly, the edge line detection unit 36 illustrated in Figure 16 detects an edge line based on the luminance difference between the attention point Pa and the reference point Pr.
[0124] This point will be described in more detail. Figure 19 is a view for describing the detailed operation of the luminance difference calculation unit 35. Figure 19(a) illustrates an aerial view image of the aerial view state, and Figure 19(b) is an enlarged view of a portion B1 of the aerial view image illustrated in Figure 19(a). In Figure 19, only the detection area A1 is illustrated and described, but the luminance difference is calculated for the detection area A2 using the same procedure.
[0125] When an adjacent vehicle V2 is being displayed in the captured image captured by the camera 10, the adjacent vehicle V2 appears in the detection area A1 in the aerial view image, as illustrated in Figure 19(a). The attention line La is set on a rubber portion of a tire of the adjacent vehicle V2 in the aerial view image in Figure 19(b), as illustrated in the enlarged view of area B1 in Figure 19(a). In this state, the luminance difference calculation unit 35 first sets the reference line Lr. The reference line Lr is set along the perpendicular direction at a position separated from the attention line La by a predetermined distance in real space. Specifically, in the three-dimensional object detection device 1a according to the present embodiment, the reference line Lr is set at a position 10 cm away from the attention line La in real space. The reference line Lr is thereby set on the wheel of the tire of the adjacent vehicle V2, for example, at a distance corresponding to 10 cm from the tire rubber of the adjacent vehicle V2 in the aerial view image.
[0126] Next, the luminance difference calculation unit 35 sets a plurality of attention points Pa1 to PaN on the attention line La. In Figure 19(b), six attention points Pa1 to Pa6 (referred to below as attention point Pai when indicating an arbitrary point) are set for convenience of description. An arbitrary number of attention points Pa can be set on the attention line La. In the description below, N attention points Pa are set on the attention line La.
[0127] The luminance difference calculation unit 35 subsequently sets the reference points Pr1 to PrN so as to have the same heights as the attention points Pa1 to PaN in real space. The luminance difference calculation unit 35 then calculates the luminance difference between attention point Pa and reference point Pr pairs at the same height. The luminance difference calculation unit 35 thereby calculates the luminance difference between two pixels for each of the plurality of positions (1 - N) along the perpendicular imaginary line extending in the perpendicular direction in real space. The luminance difference calculation unit 35 calculates the luminance difference between, for example, a first attention point Pa1 and a first reference point Pr1, and calculates the luminance difference between a second attention point Pa2 and a second reference point Pr2. The luminance difference calculation unit 35 thereby determines the luminance difference continuously along the attention line La and the reference line Lr. In other words, the luminance difference calculation unit 35 sequentially determines the luminance differences between the third to Nth attention points Pa3 to PaN and the third to Nth reference points Pr3 to PrN.
[0128] The luminance difference calculation unit 35 repeats the above-described processing of setting the reference line Lr, setting the attention points Pa, setting the reference points Pr, and calculating the luminance differences while shifting the attention line La within the detection area A1. In other words, the luminance difference calculation unit 35 repeatedly performs the above-described process while changing the positions of the attention line La and the reference line Lr by the same distance in real space along the direction in which the ground line L1 extends. The luminance difference calculation unit 35, for example, sets the line that was the reference line Lr in the previous process to be the attention line La, sets a new reference line Lr with respect to that attention line La, and sequentially determines the luminance differences.
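The pairing of attention points and reference points described above can be pictured with the short sketch below; the array layout and the way the points are supplied are illustrative assumptions.

```python
import numpy as np

def luminance_differences(bird_eye, attention_pts, reference_pts):
    """Luminance difference for each pair (Pa1, Pr1), (Pa2, Pr2), ... of points
    set at matching heights on the attention line La and the reference line Lr.
    bird_eye is a single-channel bird's-eye-view image indexed as [row, col];
    the point lists hold (x, y) pixel coordinates."""
    diffs = [float(bird_eye[ya, xa]) - float(bird_eye[yr, xr])
             for (xa, ya), (xr, yr) in zip(attention_pts, reference_pts)]
    return np.asarray(diffs)
```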
[0129] In this way, in the second embodiment, determining the luminance difference from the attention point Pa on the attention line La and the reference point Pr on the reference line Lr, which are at substantially the same height in real space, allows the luminance difference to be distinctly detected when an edge extending in the perpendicular direction is present. Because the luminance is compared between perpendicular imaginary lines extending in the perpendicular direction in real space, the accuracy of detecting a three-dimensional object can be increased without the detection process being affected, even when the three-dimensional object is stretched according to its height from the road surface by the conversion to an aerial view image.
[0130] Returning to Figure 16, the edge line detection unit 36 detects an edge line from the continuous luminance differences calculated by the luminance difference calculation unit 35. For example, in the case illustrated in Figure 19(b), the first attention point Pa1 and the first reference point Pr1 are positioned on the same tire portion, and the luminance difference is therefore small. On the other hand, the second to sixth attention points Pa2 to Pa6 are positioned on the rubber portion of the tire, and the second to sixth reference points Pr2 to Pr6 are positioned on the wheel portion of the tire. Therefore, the luminance differences between the second to sixth attention points Pa2 to Pa6 and the second to sixth reference points Pr2 to Pr6 are large. Accordingly, the edge line detection unit 36 is able to detect the presence of an edge between the second to sixth attention points Pa2 to Pa6 and the second to sixth reference points Pr2 to Pr6, where the luminance difference is high.
[0131] Specifically, when an edge line is to be detected, the edge line detection unit 36 first assigns an attribute to the ith attention point Pai based on the luminance difference between the ith attention point Pai (coordinates (xi, yi)) and the ith reference point Pri (coordinates (xi', yi')), according to Formula 2 noted below. [Formula 2]
s(xi, yi) = 1 when I(xi, yi) > I(xi', yi') + t
s(xi, yi) = -1 when I(xi, yi) < I(xi', yi') - t
s(xi, yi) = 0 when the above is not true.
[0132] In Formula 2 above, t represents a predetermined threshold value, I(xi, yi) represents the luminance value of the ith attention point Pai, and I(xi', yi') represents the luminance value of the ith reference point Pri. According to Formula 2, the attribute s(xi, yi) of the attention point Pai is '1' when the luminance value of the attention point Pai is greater than the luminance value obtained by adding the threshold value t to the luminance value of the reference point Pri. On the other hand, the attribute s(xi, yi) of the attention point Pai is '-1' when the luminance value of the attention point Pai is less than the luminance value obtained by subtracting the threshold value t from the luminance value of the reference point Pri. The attribute s(xi, yi) of the attention point Pai is '0' when the luminance value of the attention point Pai and the luminance value of the reference point Pri are in a relationship other than those stated above.
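Formula 2 maps directly onto a three-way comparison; the sketch below is a literal transcription of it (all names are illustrative).

```python
def attribute_s(luma_attention: float, luma_reference: float, t: float) -> int:
    """Attribute s(xi, yi) of an attention point Pai relative to its reference
    point Pri, per Formula 2 with the predetermined threshold value t."""
    if luma_attention > luma_reference + t:
        return 1
    if luma_attention < luma_reference - t:
        return -1
    return 0
```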
[0133] Next, the edge line detection unit 36 assesses whether the attention line La is an edge line from the continuity c(xi, yi) of the attribute s along the attention line La, based on Formula 3 below: [Formula 3]
c(xi, yi) = 1 when s(xi, yi) = s(xi + 1, yi + 1)
c(xi, yi) = 0 when the above is not true.
[0134] The continuity c(xi, yi) is '1' when the attribute s(xi, yi) of the attention point Pai and the attribute s(xi + 1, yi + 1) of the adjacent attention point Pai + 1 are the same. The continuity c(xi, yi) is '0' when the attribute s(xi, yi) of the attention point Pai and the attribute s(xi + 1, yi + 1) of the adjacent attention point Pai + 1 are not the same.
[0135] Next, the edge line detection unit 36 determines the sum of the continuities c of all the attention points Pa in the attention line La. The edge line detection unit 36 divides the sum of the continuities c thereby determined by the number N of the attention points Pa to thereby normalize the continuity c. The edge line detection unit 36 determines the attention line La to be an edge line when the normalized value has exceeded a threshold value θ. The threshold value θ is adjusted in advance by experimentation or other means.
[0136] In other words, the edge line detection unit 36 determines whether the attention line La is an edge line based on Formula 4 below. The edge line detection unit 36 then determines whether all of the attention lines La drawn on the detection area A1 are edge lines. [Formula 4]
Σc(xi, yi) / N > θ
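Formulas 3 and 4 together amount to counting attribute agreements between neighbouring attention points and normalizing by N; a minimal sketch, assuming the attributes have already been computed with Formula 2, is given below.

```python
def is_edge_line(attributes: list, theta: float) -> bool:
    """Attention line La is judged to be an edge line when the sum of the
    continuities c (Formula 3), normalized by the number N of attention
    points, exceeds the threshold value theta (Formula 4)."""
    n = len(attributes)
    if n < 2:
        return False
    continuities = [1 if a == b else 0 for a, b in zip(attributes, attributes[1:])]
    return sum(continuities) / n > theta
```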
[0137] Thus, in the second embodiment, an attribute is assigned to the attention point Pa based on the luminance difference between the attention point Pa on the attention line La and the reference point Pr on the reference line Lr, and it is determined whether the attention line La is an edge line based on the continuity c of the attributes along the attention line La. Therefore, the boundaries between areas that have high luminance and areas that have low luminance are detected as edge lines, and edges can be detected in accordance with a human's natural senses. The above will now be described in detail. Figure 20 is a view illustrating an example image for describing the processing of the edge line detection unit 36. This example image is an image in which a first stripe pattern 101 and a second stripe pattern 102 are adjacent to each other, the first stripe pattern 101 indicating a stripe pattern in which high luminance areas and low luminance areas are repeated, and the second stripe pattern 102 indicating a stripe pattern in which low luminance areas and high luminance areas are repeated. Further, in this example image, the areas of the first stripe pattern 101 in which the luminance is high and the areas of the second stripe pattern 102 in which the luminance is low are adjacent to each other, and the areas of the first stripe pattern 101 in which the luminance is low and the areas of the second stripe pattern 102 in which the luminance is high are adjacent to each other. The location 103 positioned on the boundary between the first stripe pattern 101 and the second stripe pattern 102 tends not to be perceived as an edge by the human senses.
[0138] In contrast, because the low luminance areas and the high luminance areas are adjacent to each other, the location 103 is recognized as an edge when an edge is detected based only on the luminance difference. However, the edge line detection unit 36 assesses the location 103 to be an edge line only when there is continuity in the luminance difference attributes. Therefore, the edge line detection unit 36 is able to suppress the erroneous assessment in which the location 103, which is not recognized as an edge line by the human senses, is recognized as an edge line, and edges can be detected in accordance with the human senses.
[0139] Returning to Figure 16, the three-dimensional object detection device 33a detects a three-dimensional object based on the quantity of edge lines detected by the edge line detection unit 36. As described above, the three-dimensional object detection device 1a according to the present embodiment detects edge lines extending in the perpendicular direction in real space. The detection of many edge lines extending in the perpendicular direction indicates that there is a high probability that a three-dimensional object is present in the detection areas A1, A2. Therefore, the three-dimensional object detection device 33a detects a three-dimensional object based on the quantity of edge lines detected by the edge line detection unit 36. Specifically, the three-dimensional object detection device 33a determines whether the quantity of edge lines detected by the edge line detection unit 36 is a predetermined threshold value β or greater, and when the quantity of edge lines is the predetermined threshold value β or greater, the edge lines detected by the edge line detection unit 36 are determined to be the edge lines of a three-dimensional object, and the three-dimensional object based on those edge lines is thereby detected as an adjacent vehicle V2.
[0140] Furthermore, prior to detecting the three-dimensional object, the three-dimensional object detection device 33a assesses whether the edge lines detected by the edge line detection unit 36 are correct. The three-dimensional object detection device 33a assesses whether the change in luminance along an edge line in the aerial view image is a predetermined threshold value Tb or greater. When the change in luminance along an edge line in the aerial view image is the predetermined threshold value Tb or greater, the edge line is determined to have been detected by erroneous assessment. On the other hand, when the change in luminance along an edge line in the aerial view image is less than the predetermined threshold value Tb, the edge line is assessed to be correct. The threshold value Tb is set in advance by experimentation or other means.
[0141] Figure 21 is a view illustrating the luminance distribution of an edge line; Figure 21(a) illustrates the edge line and the luminance distribution when an adjacent vehicle V2 is present as a three-dimensional object in the detection area A1, and Figure 21(b) illustrates the edge line and the luminance distribution when a three-dimensional object is not present in the detection area A1.
[0142] As illustrated in Figure 21(a), it is assumed that the attention line La set on the tire rubber portion of the adjacent vehicle V2 has been determined to be an edge line in the aerial view image. In this case, the change in luminance along the attention line La in the aerial view image is gradual. This is because the image captured by the camera 10 is converted in viewpoint to an aerial view image, whereby the tire of the adjacent vehicle is stretched within the aerial view image. On the other hand, it is assumed that the attention line La set on the white character portion '50' drawn on the road surface in the aerial view image has been erroneously assessed as an edge line, as illustrated in Figure 21(b). In this case, the change in luminance along the attention line La in the aerial view image has considerable undulations. This is because portions of high luminance in the white characters and portions of low luminance of the road surface and the like are mixed together on the edge line.
[0143] The three-dimensional object detection device 33a assesses whether an edge line has been detected by erroneous assessment based on differences in the luminance distribution along the attention line La, as described above. The three-dimensional object detection device 33a determines that the edge line has been detected by erroneous assessment when the change in luminance along the edge line is the predetermined threshold value Tb or greater, and determines that the edge line is not caused by a three-dimensional object. A reduction in the accuracy of detecting a three-dimensional object is thereby suppressed when white characters such as '50' on the road surface, roadside vegetation, and the like are assessed as edge lines. On the other hand, the three-dimensional object detection device 33a determines that an edge line is the edge line of a three-dimensional object, and determines that a three-dimensional object is present, when the change in luminance along the edge line is less than the predetermined threshold value Tb.
[0144] Specifically, the three-dimensional object detection device 33a calculates the change in luminance of an edge line using Formula 5 or 6 noted below. The change in luminance of the edge line corresponds to an evaluation value in the perpendicular direction in real space. Formula 5 evaluates the luminance distribution using the total of the squared values of the differences between the ith luminance value I(xi, yi) and the adjacent (i + 1)th luminance value I(xi + 1, yi + 1) on the attention line La. Formula 6 evaluates the luminance distribution using the total of the absolute values of the differences between the ith luminance value I(xi, yi) and the adjacent (i + 1)th luminance value I(xi + 1, yi + 1) on the attention line La. [Formula 5]
Evaluation value in the equivalent perpendicular direction = Σ[{I(xi, yi) - I(xi + 1, yi + 1)}²]
[Formula 6]
Evaluation value in the equivalent perpendicular direction = Σ|I(xi, yi) - I(xi + 1, yi + 1)|
[0145] No limitation is imposed on the use of Formula 6, and it is also possible to binarize an attribute b of an adjacent luminance value using a threshold value t2, and then sum the binarized attributes b for all of the attention points Pa, as in Formula 7 noted below. [Formula 7]
Evaluation value in the equivalent perpendicular direction = Σb(xi, yi)
where b(xi, yi) = 1 when |I(xi, yi) - I(xi + 1, yi + 1)| > t2
and b(xi, yi) = 0 when the above is not true.
[0146] The attribute b(xi, yi) of the attention point Pa(xi, yi) is '1' when the absolute value of the luminance difference between the luminance value of the attention point Pai and the luminance value of the reference point Pri is greater than the threshold value t2. When the above relationship does not hold, the attribute b(xi, yi) of the attention point Pai is '0'. The threshold value t2 is set in advance by experimentation or other means so that the attention line La is not assessed as being on the same three-dimensional object. The three-dimensional object detection device 33a then sums the attribute b for all of the attention points Pa on the attention line La and determines the evaluation value in the equivalent perpendicular direction, thereby assessing whether an edge line is caused by a three-dimensional object and whether a three-dimensional object is present.
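The three evaluation values of Formulas 5 to 7 can be computed from the luminance values sampled along an edge line; the sketch below is illustrative only, with the method argument selecting which formula is applied.

```python
def luminance_change(values: list, method: str = "abs", t2: float = 0.0) -> float:
    """Evaluation value in the equivalent perpendicular direction for an edge
    line, from consecutive luminance values sampled at the attention points:
    'squared' follows Formula 5, 'abs' follows Formula 6, and any other value
    follows Formula 7 with the threshold value t2."""
    pairs = list(zip(values, values[1:]))
    if method == "squared":
        return float(sum((a - b) ** 2 for a, b in pairs))
    if method == "abs":
        return float(sum(abs(a - b) for a, b in pairs))
    return float(sum(1 for a, b in pairs if abs(a - b) > t2))
```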
[0147] Thus, the edge waveform is one mode of pixel distribution information that indicates a predetermined luminance difference, and the 'pixel distribution information' in the present embodiment can be positioned as information indicating the state of distribution of 'pixels having a luminance difference at a predetermined threshold value or greater' as detected along the direction in which the three-dimensional object tips when the captured image is converted in viewpoint to an aerial view image. In other words, the three-dimensional object detection device 33a detects a three-dimensional object in the aerial view image obtained by the viewpoint conversion unit 31 based on the pixel distribution information having a luminance difference at a predetermined threshold value or greater along the direction in which the three-dimensional object tips when the viewpoint is converted to an aerial view image.
[0148] The detection area adjustment unit 34 determines whether the adjacent vehicle has passed the host vehicle in the same manner as in the first embodiment, and when it has been determined that the adjacent vehicle has passed the host vehicle, the detection areas A1, A2 are enlarged backwards with respect to the direction of advance of the vehicle.
[0149] Next, the method for detecting an adjacent vehicle according to the second embodiment will be described. Figure 22 is a flowchart illustrating the details of the method for detecting an adjacent vehicle according to the second embodiment. In Figure 22, the process involving the detection area A1 will be described for convenience, but the same process is also performed for the detection area A2.
[0150] First, in step S301, the detection areas A1, A2 for detecting an adjacent vehicle are set in the same manner as in step S101 of the first embodiment. In step S301, the detection areas set by the detection area adjustment process illustrated in Figure 14 are set in the same manner as in the first embodiment.
[0151] In step S302, a predetermined area specified by the angle of view a and the attachment position is captured by the camera 10, and the image data of the captured image captured by the camera 10 is acquired by the computer 30a. Next, the viewpoint conversion unit 31 converts the viewpoint of the acquired image data and generates aerial view image data in step S303.
[0152] Next, in step S304, the luminance difference calculation unit 35 sets the attention line La and the reference line Lr in the detection area A1. At this time, the luminance difference calculation unit 35 sets, as the attention line La, a line corresponding to a line extending in the perpendicular direction in real space, and sets, as the reference line Lr, a line corresponding to a line segment extending in the perpendicular direction in real space and separated from the attention line La by a predetermined distance in real space.
[0153] Next, in step S305, the luminance difference calculation unit 35 sets a plurality of attention points Pa on the attention line La, and sets the reference points Pr so that the attention points Pa and the reference points Pr are at substantially the same heights in real space. The attention points Pa and the reference points Pr thereby line up substantially in the horizontal direction, and an edge line extending in the perpendicular direction in real space is more easily detected. The luminance difference calculation unit 35 sets a number of attention points Pa that will not be problematic during edge detection by the edge line detection unit 36.
[0154] Next, in step S306, the luminance difference calculation unit 35 calculates the luminance differences between the attention points Pa and the reference points Pr at the same heights in real space. The edge line detection unit 36 then calculates the attribute s of each attention point Pa according to Formula 2 described above. In step S307, the edge line detection unit 36 calculates the continuity c of the attributes s of the attention points Pa according to Formula 3. In step S308, the edge line detection unit 36 further assesses whether the value obtained by normalizing the sum of the continuities c is greater than the threshold value θ according to Formula 4. When it has been determined that the normalized value is greater than the threshold value θ (step S308 = Yes), the edge line detection unit 36 detects the attention line La as an edge line in step S309. The process then proceeds to step S310. When it has been determined that the normalized value is not greater than the threshold value θ (step S308 = No), the edge line detection unit 36 does not detect the attention line La as an edge line, and the process proceeds to step S310.
[0155] In step S310, the computer 30a determines whether the processes of steps S304 to S310 have been performed for all of the attention lines La that can be set in the detection area A1. When it has been determined that the above processes have not been performed for all of the attention lines La (step S310 = No), the process returns to step S304, a new attention line La is set, and the process is repeated through step S311. On the other hand, when it has been determined that the processes have been performed for all of the attention lines La (step S310 = Yes), the process proceeds to step S311.
[0156] In step S311, the three-dimensional object detection device 33a calculates the change in luminance along the edge line for each edge line detected in step S309. The three-dimensional object detection device 33a calculates the change in luminance of the edge lines according to any one of Formulas 5, 6, and 7. Next, in step S312, the three-dimensional object detection device 33a excludes, from among the edge lines, those edge lines for which the change in luminance is at the predetermined threshold value Tb or greater. In other words, an edge line having a large change in luminance is not assessed to be a correct edge line, and the edge line is not used for detecting a three-dimensional object. As described above, this is done in order to suppress the detection, as edge lines, of characters on the road surface, roadside vegetation, and the like included in the detection area A1. Therefore, the predetermined threshold value Tb is determined in advance by experimentation or other means, and is set based on the change in luminance that occurs due to characters on the road surface, roadside vegetation, and the like. On the other hand, the three-dimensional object detection device 33a determines an edge line having a change in luminance that is less than the predetermined threshold value Tb to be an edge line of a three-dimensional object, and thereby detects the three-dimensional object present in the adjacent lane.
[0157] Next, in step S313, the three-dimensional object detection unit 33a determines whether the quantity of edge lines is the threshold value β or higher. When it has been assessed that the quantity of edge lines is the threshold value β or higher (step S313 = Yes), the three-dimensional object detection device 33a assesses in step S314 that an adjacent vehicle is present in the detection area A1. In the subsequent step S315, the notification device 50 provides notification that an adjacent vehicle is present behind the host vehicle. On the other hand, when it has been assessed that the quantity of edge lines is not the threshold value β or higher (step S313 = No), the three-dimensional object detection device 33a assesses in step S316 that an adjacent vehicle is not present in the detection area A1. The process illustrated in Figure 22 is thereby completed.
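Steps S311 to S316 reduce to filtering edge lines by their luminance change and counting the survivors; the following sketch is illustrative only, with the thresholds Tb and β left as parameters.

```python
def adjacent_vehicle_present(edge_lines: list, luminance_changes: list,
                             tb: float, beta: int) -> bool:
    """Discard edge lines whose luminance change is the threshold Tb or greater
    (step S312), and report an adjacent vehicle when the number of remaining
    edge lines is the threshold value beta or greater (steps S313/S314)."""
    valid = [line for line, change in zip(edge_lines, luminance_changes)
             if change < tb]
    return len(valid) >= beta
```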
[0158] In the second embodiment, the detection area adjustment process illustrated in Figure 14 is performed in parallel with the adjacent vehicle detection process illustrated in Figure 22, in the same manner as in the first embodiment. The adjacent vehicle detection process illustrated in Figure 22 is performed in the detection areas set by this detection area adjustment process.
[0159] As described above, in the second embodiment, the captured image is converted to an aerial view image, edge information of the three-dimensional object is detected from the converted aerial view image, and an adjacent vehicle present in an adjacent lane is thereby detected. Furthermore, in the second embodiment, when it has been assessed that the adjacent vehicle has passed the host vehicle, the detection areas A1, A2 are enlarged backwards with respect to the direction of advance of the vehicle, in the same manner as in the first embodiment. Thereby, in addition to the effects of the first embodiment, it is possible to adequately detect a second adjacent vehicle traveling behind a first adjacent vehicle that has passed the host vehicle when two adjacent vehicles are traveling consecutively, even when an adjacent vehicle is detected based on edge information.
[0160] The embodiments described above are described in order to facilitate understanding of the present invention, and are not described in order to limit the present invention. Therefore, the elements disclosed in the embodiments above are intended to include all design modifications and equivalents thereof that fall within the technical scope of the present invention.
[0161] For example, in the embodiments described above, when the host vehicle V1 is changing direction, as illustrated in Figure 12(A), a configuration was provided as an example in which the amount by which the detection area A2 is enlarged backwards is reduced so that the detection area does not enter the lane on which the host vehicle is traveling, but no limitation is imposed thereby, and it is also possible to use a configuration in which, for example, the detection area A2 is not enlarged backwards, and instead the detection area A2 is narrowed forward when the host vehicle V1 is changing direction, as illustrated in Figure 23(A), even when the first adjacent vehicle V2 is passing the host vehicle V1. In other words, it is possible to use a configuration in which the extent in the direction of advance of the detection area A2 set on the inside of the turn of the host vehicle V1 is set to be smaller than the extent of the detection area A' set on the outside of the turn in the direction of advance of the host vehicle V1 (the extent set in advance when the host vehicle V1 is not being passed by the adjacent vehicle V2), as illustrated in Figure 23(A). It is also possible to use a configuration in which the detection areas A1, A2 are rotated and set so as to incline towards the turning direction with respect to the direction of advance of the host vehicle V1, as illustrated in Figure 23(B), so that the detection areas A1, A2 do not enter the lane on which the host vehicle is traveling. Furthermore, it is possible to use a configuration in which the detection areas A1, A2 are enlarged to the rear of the host vehicle within a range in which a rear vehicle V3 traveling on the lane on which the host vehicle is traveling is not detected, even if the detection areas enter the lane on which the host vehicle is traveling.
[0162] Furthermore, in the embodiments described above, a configuration was provided as an example in which the determination of whether the host vehicle is changing direction is made based on the road shape predicted from the captured image captured by the camera 10, or based on the steering angle detected by the steering angle sensor 40. However, no limitation is imposed by this configuration, and, for example, it is also possible to acquire the radius of curvature of the curve on which the host vehicle is traveling from map information obtained by a navigation device, and to thereby determine whether the host vehicle is changing direction. In addition, it may be determined whether the host vehicle is turning based on the yaw rate and the speed of the host vehicle.
[0163] In addition, in the above-described embodiments, a configuration was provided as an example in which the detection areas are narrowed forward with respect to the direction of advance of the vehicle when a predetermined period of time has elapsed after the adjacent vehicle can no longer be detected in the detection areas. However, no limitation is imposed by this configuration, and it is also possible to use a configuration in which the detection areas A1, A2 are narrowed forward with respect to the direction of advance of the vehicle when the host vehicle has traveled a predetermined distance after an adjacent vehicle can no longer be detected in the detection areas A1, A2. Also, in this case, it is possible to use a configuration in which the predetermined distance is modified based on the relative movement speed of the adjacent vehicle V2 with respect to the host vehicle V1. For example, the detection area adjustment unit 34 can be configured so as to predict that the relative movement speed of the second adjacent vehicle V2' is commensurately greater for a greater relative movement speed of the first adjacent vehicle V2 with respect to the host vehicle V1 and that the second adjacent vehicle will soon reach the host vehicle, and to increase the predetermined distance. Alternatively, it is possible to use a configuration in which it is determined that the time the adjacent vehicle remains behind the host vehicle is commensurately shorter for a greater relative movement speed of the first adjacent vehicle V2 with respect to the host vehicle V1, and the predetermined distance is reduced. Furthermore, it is possible to use a configuration in which the speed of the host vehicle is considered, and it is predicted that the second adjacent vehicle V2' will not soon reach the host vehicle when the speed of the host vehicle is sufficiently high and the distance between the first adjacent vehicle V2 and the second adjacent vehicle V2' is large, and the predetermined distance is not increased, even when, for example, the relative movement speed of the first adjacent vehicle V2 with respect to the host vehicle V1 is high. In this way, setting the predetermined distance in accordance with the travel state of the host vehicle allows the second adjacent vehicle V2' to be properly detected.
[0164] In the embodiments described above, a configuration was given as an example in which the detection areas are enlarged rearward with respect to the direction of advance of the vehicle; when the detection areas A1, A2 are to be enlarged rearward with respect to the direction of advance of the vehicle, it is possible to use a configuration in which the detection areas are enlarged in a single step, or a configuration in which the detection areas are enlarged gradually.
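The two enlargement strategies can be sketched as follows; the step size and function names are assumptions for illustration only.

```python
# Sketch of widening the rear edge of the detection area in a single step
# versus approaching the target length gradually over successive frames.
def widen_single_step(current_m: float, target_m: float) -> float:
    return max(current_m, target_m)


def widen_gradually(current_m: float, target_m: float, step_m: float = 0.5) -> float:
    if current_m >= target_m:
        return current_m
    return min(target_m, current_m + step_m)


if __name__ == "__main__":
    length = 7.0
    print(widen_single_step(length, 12.0))   # 12.0 immediately
    for _ in range(4):                        # 7.5, 8.0, 8.5, 9.0 ...
        length = widen_gradually(length, 12.0)
        print(length)
```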
[0165] The camera 10 in the embodiments described above corresponds to the image capture unit of the present invention. The viewpoint conversion unit 31 corresponds to the image conversion unit of the present invention; the alignment unit 32, the three-dimensional object detection unit 33, 33a, the luminance difference calculation unit 35, and the edge line detection unit 36 correspond to the three-dimensional object detection unit of the present invention. The three-dimensional object detection unit 33, 33a also corresponds to the relative movement speed calculation unit of the present invention, and the detection area adjustment unit 34 corresponds to the detection area adjustment unit and the turning behavior detection unit of the present invention.
Reference Signs List
1, 1a: Three-dimensional object detection device
10: Camera
20: Speed sensor
30, 30a: Computer
31: Viewpoint conversion unit
32: Alignment unit
33, 33a: Three-dimensional object detection unit
34: Detection area adjustment unit
35: Luminance difference calculation unit
36: Edge line detection unit
40: Steering angle sensor
50: Notification device
a: View angle
A1, A2: Detection areas
CP: Crossing point
DP: Difference pixels
DWt, DWt': Difference waveform
DWt1 to DWtn: Small areas
L1, L2: Ground lines
La, Lb: Lines in the direction in which the three-dimensional object topples
PBt: Aerial view image
PDt: Difference image
V1: Host vehicle
V2: Adjacent vehicle
Claims (24)
[0001]
1. Three-dimensional object detection device, CHARACTERIZED by comprising: an image capture unit arranged to capture images of an area behind a host vehicle equipped with the three-dimensional object detection device; and a computer which is programmed to include: a detection area adjustment unit programmed to adjust a predetermined detection area in a lateral direction behind the host vehicle; an image conversion unit programmed to convert the viewpoint of the captured image obtained by the image capture unit to create aerial view images; a three-dimensional object detection unit programmed to detect a presence of a three-dimensional object within the predetermined detection area by vehicle width direction detection processing wherein aerial view images obtained at different times by the image conversion unit are aligned, programmed to form a frequency distribution by counting a number of pixels that indicate a predetermined difference in a difference image of the aerial view images to form difference waveform information, wherein the pixels are counted along lines in a direction in which the three-dimensional object topples, and programmed to detect the presence of the three-dimensional object within the predetermined detection area based on the difference waveform information when a peak in the difference waveform information is determined to be at a threshold value or greater; and a relative motion velocity calculating unit programmed to calculate a relative motion velocity of the three-dimensional object based on the difference waveform information generated by the three-dimensional object detection unit, the relative motion velocity being a velocity of the three-dimensional object with respect to a speed of the host vehicle, the detection area adjusting unit extending the predetermined detection area backwards with respect to a direction of advance of the vehicle when the three-dimensional object is detected in the predetermined detection area by the three-dimensional object detection unit and the relative movement speed of the three-dimensional object is a predetermined value or greater.
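Purely as an illustration of the processing recited in claim 1, and not as part of the claim itself, the following NumPy sketch takes a current aerial view image and a previous one assumed to have been aligned to it, counts per column the pixels whose difference exceeds a threshold to obtain a difference waveform, detects a three-dimensional object when the waveform peak reaches a threshold, and estimates the relative movement from the offset between successive waveforms. Counting along image columns stands in for the true directions in which a three-dimensional object topples, and every threshold is an assumption.

```python
# Illustrative, non-normative sketch of the difference-waveform idea.
import numpy as np


def difference_waveform(bev_now: np.ndarray,
                        bev_prev_aligned: np.ndarray,
                        pixel_threshold: int = 20) -> np.ndarray:
    """Count, per column, the pixels whose difference exceeds a threshold."""
    diff = np.abs(bev_now.astype(int) - bev_prev_aligned.astype(int))
    return (diff >= pixel_threshold).sum(axis=0)  # frequency distribution


def detect_object(waveform: np.ndarray, peak_threshold: int = 15) -> bool:
    return bool(waveform.max() >= peak_threshold)


def relative_speed(waveform_now: np.ndarray,
                   waveform_prev: np.ndarray,
                   metres_per_pixel: float,
                   frame_dt_s: float) -> float:
    """Estimate relative speed from the offset that best matches the waveforms
    (wrap-around at the ends is ignored for brevity)."""
    offsets = range(-20, 21)
    scores = [np.sum(np.abs(np.roll(waveform_now, o) - waveform_prev)) for o in offsets]
    best = offsets[int(np.argmin(scores))]
    return best * metres_per_pixel / frame_dt_s


if __name__ == "__main__":
    prev = np.zeros((100, 60), dtype=np.uint8)
    now = prev.copy()
    now[40:80, 25:35] = 200          # a bright "vehicle" appears in the area
    wf = difference_waveform(now, prev)
    print(detect_object(wf))         # True
```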
[0002]
2. Three-dimensional object detection device, CHARACTERIZED by comprising: an image capture unit arranged to capture images of an area behind a host vehicle equipped with the three-dimensional object detection device; and a computer which is programmed to include: a detection area adjustment unit programmed to adjust a predetermined detection area in a lateral direction behind the host vehicle; an image conversion unit programmed to convert the viewpoint of the captured image obtained by the image capture unit to create aerial view images; a three-dimensional object detection unit programmed to detect a presence of a three-dimensional object within the predetermined detection area based on edge information from the aerial view images obtained by the image conversion unit; and a relative motion velocity calculating unit programmed to calculate a relative motion velocity of the three-dimensional object based on the edge information detected by the three-dimensional object detection unit, the relative motion velocity being a velocity of the three-dimensional object with respect to a speed of the host vehicle, the detection area adjusting unit extending the predetermined detection area backwards with respect to a direction of advance of the vehicle when the three-dimensional object is detected in the predetermined detection area by the three-dimensional object detection unit and the relative movement speed of the three-dimensional object is a predetermined value or greater.
[0003]
3. Three-dimensional object detection device according to claim 1, CHARACTERIZED by the fact that the detection area adjustment unit widens the predetermined detection area backwards with respect to the direction of advance of the vehicle, and then narrows the predetermined detection area forward with respect to the vehicle's forward direction when the three-dimensional object is no longer detected.
[0004]
4. Three-dimensional object detection device according to claim 3, CHARACTERIZED by the fact that the detection area adjustment unit extends the predetermined detection area backwards with respect to the direction of advance of the vehicle, and then does not narrow the predetermined detection area forward with respect to the vehicle's forward direction until a predetermined period of time elapses after the three-dimensional object is no longer detected.
[0005]
5. Three-dimensional object detection device according to claim 4, CHARACTERIZED by the fact that the detection area adjustment unit adjusts the predetermined time to be shorter in correspondence with a higher relative motion speed of the three-dimensional object with respect to the host vehicle.
[0006]
6. Three-dimensional object detection device according to claim 4, CHARACTERIZED by the fact that the detection area adjustment unit adjusts the predetermined time to be longer in correspondence with a higher relative movement speed of the three-dimensional object.
[0007]
7. Three-dimensional object detection device according to claim 3, CHARACTERIZED by the fact that the detection area adjustment unit extends the predetermined detection area backwards with respect to the direction of advance of the vehicle, and then does not narrow the predetermined detection area forward with respect to the vehicle's direction of advance until the host vehicle has traveled a predetermined distance after the three-dimensional object is no longer detected.
[0008]
8. Three-dimensional object detection device, according to claim 7, CHARACTERIZED by the fact that the detection area adjustment unit adjusts the predetermined distance to be smaller in correspondence with a higher relative motion speed of the three-dimensional object.
[0009]
9. Three-dimensional object detection device, according to claim 7, CHARACTERIZED by the fact that the detection area adjustment unit adjusts the predetermined distance to be greater in correspondence with a higher relative motion speed of the three-dimensional object.
[0010]
10. Three-dimensional object detection device according to claim 3, CHARACTERIZED by the fact that the detection area adjustment unit enlarges the predetermined detection area at a first speed when the predetermined detection area is to be enlarged backwards with respect to the forward direction of the vehicle, and narrows the predetermined detection area at a second speed that is less than the first speed when the predetermined detection area is to be narrowed forward with respect to the forward direction of the host vehicle, wherein the first speed corresponds to the relative movement speed of the three-dimensional object, and the second speed corresponds to a speed of the host vehicle.
[0011]
11. Three-dimensional object detection device according to claim 1, CHARACTERIZED by the fact that the computer is further programmed to include a turning behavior detection unit programmed to detect turning behavior of the host vehicle, the detection area adjustment unit determines whether the host vehicle is turning, based on the turning behavior of the host vehicle, when the predetermined detection area is to be enlarged backwards with respect to the direction of advance of the vehicle, and reduces an amount by which the predetermined detection area is to be enlarged in correspondence with a smaller turning radius of the host vehicle as determined from the turning behavior of the host vehicle when the host vehicle is determined to be turning.
[0012]
12. Three-dimensional object detection device, CHARACTERIZED by comprising: an image capture unit arranged to capture an image of an area behind a host vehicle equipped with the three-dimensional object detection device; and a computer which is programmed to include: a detection area adjustment unit programmed to adjust a predetermined detection area in a lateral direction behind the host vehicle; an image conversion unit programmed to convert the viewpoint of the captured image obtained by the image capture unit to create an aerial view image; a three-dimensional object detection unit programmed to detect a presence of a three-dimensional object within the predetermined detection area in the aerial view image obtained by the image conversion unit based on pixel distribution information whose luminance difference is at a predetermined threshold value or greater along a direction in which the three-dimensional object topples when the viewpoint is converted to the aerial view image; and a relative motion velocity calculating unit programmed to calculate a relative motion velocity of the three-dimensional object based on a change in time of the pixel distribution information, the relative motion velocity being a velocity of the three-dimensional object with respect to a speed of the host vehicle, the detection area adjusting unit extending the predetermined detection area backwards with respect to a direction of advance of the vehicle when the three-dimensional object is detected in the predetermined detection area by the three-dimensional object detection unit and the relative motion speed of the three-dimensional object is a predetermined value or greater.
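Again purely as an illustration, and not as part of claim 12, the following sketch counts, within an aerial-view detection area, the pixels whose luminance difference along an assumed topple direction is at a threshold or greater, and treats their distribution as evidence of a three-dimensional object; the direction used, the thresholds, and the names are assumptions.

```python
# Rough sketch of detecting an object from the distribution of strong
# luminance differences; counting along image rows stands in for the true
# topple direction, and all thresholds are assumptions.
import numpy as np


def edge_pixel_distribution(bev_area: np.ndarray, luminance_threshold: int = 25) -> np.ndarray:
    """Per-row count of strong luminance steps along the assumed topple direction."""
    grad = np.abs(np.diff(bev_area.astype(int), axis=1))
    return (grad >= luminance_threshold).sum(axis=1)


def object_present(distribution: np.ndarray,
                   count_threshold: int = 2,
                   min_rows: int = 20) -> bool:
    """Object deemed present when enough rows each contain enough strong steps."""
    return bool((distribution >= count_threshold).sum() >= min_rows)


if __name__ == "__main__":
    area = np.full((60, 80), 90, dtype=np.uint8)
    area[20:50, 30:50] = 200                      # high-contrast region (a vehicle)
    dist = edge_pixel_distribution(area)
    print(object_present(dist))                   # True
```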
[0013]
13. Three-dimensional object detection device according to claim 2, CHARACTERIZED by the fact that the detection area adjustment unit widens the predetermined detection area backwards with respect to the direction of advance of the vehicle, and then narrows the predetermined detection area forward with respect to the vehicle's forward direction when the three-dimensional object is no longer detected.
[0014]
14. Three-dimensional object detection device according to claim 13, CHARACTERIZED by the fact that the detection area adjustment unit extends the predetermined detection area backwards with respect to the direction of advance of the vehicle, and then does not narrow the predetermined detection area forward with respect to the vehicle's forward direction until a predetermined period of time elapses after the three-dimensional object is no longer detected.
[0015]
15. Three-dimensional object detection device according to claim 14, CHARACTERIZED by the fact that the detection area adjustment unit adjusts the predetermined time to be shorter in correspondence with a higher relative motion speed of the three-dimensional object .
[0016]
16. Three-dimensional object detection device according to claim 14, CHARACTERIZED by the fact that the detection area adjustment unit adjusts the predetermined time to be longer in correspondence with a higher relative motion speed of the three-dimensional object .
[0017]
17. Three-dimensional object detection device according to claim 13, CHARACTERIZED by the fact that the detection area adjustment unit extends the predetermined detection area backwards with respect to the direction of advance of the vehicle, and then does not narrow the predetermined detection area forward with respect to the vehicle's direction of advance until the host vehicle has traveled a predetermined distance after the three-dimensional object is no longer detected.
[0018]
18. Three-dimensional object detection device according to claim 17, CHARACTERIZED by the fact that the detection area adjustment unit adjusts the predetermined distance to be smaller in correspondence with a higher relative motion speed of the three-dimensional object.
[0019]
19. Three-dimensional object detection device according to claim 17, CHARACTERIZED by the fact that the detection area adjustment unit adjusts the predetermined distance to be greater in correspondence with a higher relative motion speed of the three-dimensional object.
[0020]
20. Three-dimensional object detection device according to claim 13, CHARACTERIZED by the fact that the detection area adjustment unit enlarges the predetermined detection area at a first speed when the predetermined detection area is to be enlarged backwards with respect to the forward direction of the vehicle, and narrows the predetermined detection area at a second speed that is less than the first speed when the predetermined detection area is to be narrowed forward with respect to the forward direction of the host vehicle, wherein the first speed corresponds to the relative movement speed of the three-dimensional object, and the second speed corresponds to a speed of the host vehicle.
[0021]
21. Three-dimensional object detection device according to claim 2, CHARACTERIZED by the fact that the computer is further programmed to include a turning behavior detection unit programmed to detect turning behavior of the host vehicle, the detection area adjustment unit determines whether the host vehicle is turning, based on the turning behavior of the host vehicle, when the predetermined detection area is to be enlarged backwards with respect to the direction of advance of the vehicle, and reduces an amount by which the predetermined detection area is to be enlarged in correspondence with a smaller turning radius of the host vehicle as determined from the turning behavior of the host vehicle when the host vehicle is determined to be turning.
[0022]
22. Three-dimensional object detection device, CHARACTERIZED by comprising: an image capture unit arranged to capture images of an area behind a host vehicle equipped with the three-dimensional object detection device; and a computer which is programmed to include: a detection area adjustment unit programmed to adjust a predetermined detection area in a lateral direction behind the host vehicle; an image conversion unit programmed to convert the viewpoint of the captured image obtained by the image capture unit to create aerial view images; a three-dimensional object detection unit programmed to detect a presence of a three-dimensional object within the predetermined detection area by vehicle width direction detection processing wherein aerial view images obtained at different times by the image conversion unit are aligned, programmed to form a frequency distribution by counting a number of pixels that indicate a predetermined difference in a difference image of the aerial view images to form difference waveform information, wherein the pixels are counted along lines in a direction in which the three-dimensional object topples, and programmed to detect the presence of the three-dimensional object within the predetermined detection area based on the difference waveform information when a peak in the difference waveform information is determined to be at a threshold value or greater; and a relative motion velocity calculating unit programmed to calculate a relative motion velocity of the three-dimensional object with respect to the host vehicle on the basis of the difference waveform information generated by the three-dimensional object detection unit, the detection area adjustment unit extending the predetermined detection area backwards with respect to a direction of advance of the vehicle when the three-dimensional object is detected in the predetermined detection area by the three-dimensional object detection unit and the relative movement speed of the three-dimensional object is a predetermined value or greater, the detection area adjustment unit widening the predetermined detection area backwards with respect to the direction of advance of the vehicle and subsequently narrowing the predetermined detection area forward with respect to the direction of advance of the vehicle when a predetermined time elapses after the three-dimensional object is no longer detected, the detection area adjustment unit widening the predetermined detection area backwards with respect to the direction of advance of the vehicle and subsequently not narrowing the predetermined detection area forward with respect to the direction of advance of the vehicle until a predetermined time elapses after the three-dimensional object is no longer detected.
[0023]
23. Three-dimensional object detection device, CHARACTERIZED by comprising: an image capture unit arranged to capture images of an area behind a host vehicle equipped with the three-dimensional object detection device; and a computer which is programmed to include: a detection area adjustment unit programmed to adjust a predetermined detection area in a lateral direction behind the host vehicle; an image conversion unit programmed to convert the viewpoint of the captured image obtained by the image capture unit to create aerial view images; a three-dimensional object detection unit programmed to detect a presence of a three-dimensional object within the predetermined detection area based on edge information from the aerial view images obtained by the image conversion unit; and a relative motion velocity calculating unit programmed to calculate a relative motion velocity of the three-dimensional object with respect to the host vehicle on the basis of the edge information detected by the three-dimensional object detection unit, the detection area adjustment unit extending the predetermined detection area backwards with respect to a direction of advance of the vehicle when the three-dimensional object is detected in the predetermined detection area by the three-dimensional object detection unit and the relative motion speed of the three-dimensional object is a predetermined value or greater, the detection area adjustment unit widening the predetermined detection area backwards with respect to the direction of advance of the vehicle and subsequently narrowing the predetermined detection area forward with respect to the direction of advance of the vehicle when a predetermined time elapses after the three-dimensional object is no longer detected, the detection area adjustment unit widening the predetermined detection area backwards with respect to the direction of advance of the vehicle and subsequently not narrowing the predetermined detection area forward with respect to the direction of advance of the vehicle until a predetermined time elapses after the three-dimensional object is no longer detected.
[0024]
24. Three-dimensional object detection device, CHARACTERIZED by comprising: an image capture unit arranged to capture an image of an area behind a host vehicle equipped with the three-dimensional object detection device; and a computer which is programmed to include: a detection area adjustment unit programmed to adjust a predetermined detection area in a lateral direction behind the host vehicle; an image conversion unit programmed to convert the viewpoint of the captured image obtained by the image capture unit to create an aerial view image; a three-dimensional object detection unit programmed to detect a presence of a three-dimensional object within the predetermined detection area in the aerial view image obtained by the image conversion unit on the basis of pixel distribution information whose luminance difference is at a predetermined threshold value or greater along a direction in which the three-dimensional object topples when the viewpoint is converted to the aerial view image; and a relative motion velocity calculating unit programmed to calculate a relative motion velocity of the three-dimensional object with respect to the host vehicle based on a change in time of the pixel distribution information, the detection area adjustment unit extending the predetermined detection area backwards with respect to a direction of advance of the vehicle when the three-dimensional object is detected in the predetermined detection area by the three-dimensional object detection unit and the relative motion speed of the three-dimensional object is a predetermined value or greater, the detection area adjustment unit widening the predetermined detection area backwards with respect to the direction of advance of the vehicle and subsequently narrowing the predetermined detection area forward with respect to the direction of advance of the vehicle when a predetermined time elapses after the three-dimensional object is no longer detected, the detection area adjustment unit widening the predetermined detection area backwards with respect to the direction of advance of the vehicle and subsequently not narrowing the predetermined detection area forward with respect to the direction of advance of the vehicle until a predetermined time elapses after the three-dimensional object is no longer detected.